https://geolis.math.tecnico.ulisboa.pt/seminars?action=show&id=6338

## 07/09/2021, Tuesday, 16:30–17:30 Europe/Lisbon — Online
Daniele Alessandrini, Columbia University
The nilpotent cone in rank one and minimal surfaces
I will describe two interesting and closely related moduli spaces: the nilpotent cone in the moduli spaces of Higgs bundles for $\operatorname{SL}_2(\mathbb C)$ and $\operatorname{PSL}_2(\mathbb C)$, and the moduli space of equivariant minimal surfaces in the hyperbolic 3-space.
A deep understanding of these objects is important because of their relations with several fundamental constructions in geometry: singular fibers of the Hitchin fibration, branes, mirror symmetry, branched hyperbolic structures, minimal surfaces in hyperbolic 3-manifolds and so on.
A stratification of the nilpotent cone is well known and was rediscovered by many people. The closures of the strata are the irreducible components of the nilpotent cone. The talk will focus on describing the intersections between the different irreducible components.
This is joint work with Qiongling Li and Andrew Sanders.
https://benanne.github.io/2020/09/01/typicality.html

# Musings on typicality
## September 01, 2020
If you’re training or sampling from generative models, typicality is a concept worth understanding. It sheds light on why beam search doesn’t work for autoregressive models of images, audio and video; why you can’t just threshold the likelihood to perform anomaly detection with generative models; and why high-dimensional Gaussians are “soap bubbles”. This post is a summary of my current thoughts on the topic.
First, some context: one of the reasons I’m writing this is to structure my own thoughts about typicality and the unintuitive behaviour of high-dimensional probability distributions. Most of these thoughts have not been empirically validated, and several are highly speculative and could be wrong. Please bear this in mind when reading, and don’t hesitate to use the comments section to correct me. Another reason is to draw more attention to the concept, as I’ve personally found it extremely useful to gain insight into the behaviour of generative models, and to correct some of my flawed intuitions. I tweeted about typicality a few months ago, but as it turns out, I have a lot more to say on the topic!
As with most of my blog posts, I will assume a degree of familiarity with machine learning. For certain parts, some knowledge of generative modelling is probably useful as well. Section 3 of my previous blog post provides an overview of generative models.
## The joys of likelihood
When it comes to generative modelling, my personal preference for the likelihood-based paradigm is no secret (my recent foray into adversarial methods for text-to-speech notwithstanding). While there are many other ways to build and train models (e.g. using adversarial networks, score matching, optimal transport, quantile regression, … see my previous blog post for an overview), there is something intellectually pleasing about the simplicity of maximum likelihood training: the model explicitly parameterises a probability distribution, and we fit the parameters of that distribution so it is able to explain the observed data as well as possible (i.e., assigns to it the highest possible likelihood).
It turns out that this is far from the whole story, and ‘higher likelihood’ doesn’t always mean better in a way that we actually care about. In fact, the way likelihood behaves in relation to the quality of a model as measured by humans (e.g. by inspecting samples) can be deeply unintuitive. This has been well-known in the machine learning community for some time, and Theis et al.’s ‘A note on the evaluation of generative models’1 does an excellent job of demonstrating this with clever thought experiments and concrete examples. In what follows, I will expound on what I think is going on when likelihoods disagree with our intuitions.
One particular way in which a higher likelihood can correspond to a worse model is through overfitting on the training set. Because overfitting is ubiquitous in machine learning research, the unintuitive behaviours of likelihood are often incorrectly ascribed to this phenomenon. In this post, I will assume that overfitting is not an issue, and that we are talking about properly regularised models trained on large enough datasets.
## Motivating examples
### Unfair coin flips
Jessica Yung has a great blog post that demonstrates how even the simplest of probability distributions start behaving in unintuitive ways in higher-dimensional spaces, and she links this to the concept of typicality. I will borrow her example here and expand on it a bit, but I recommend reading the original post.
To summarise: suppose you have an unfair coin that lands on heads 3 times out of 4. If you toss this coin 16 times, you would expect to see 12 heads (H) and 4 tails (T) on average. Of course you wouldn’t expect to see exactly 12 heads and 4 tails every time: there’s a pretty good chance you’d see 13 heads and 3 tails, or 11 heads and 5 tails. Seeing 16 heads and no tails would be quite surprising, but it’s not implausible: in fact, it will happen about 1% of the time. Seeing all tails seems like it would be a miracle. Nevertheless, each coin toss is independent, so even this has a non-zero probability of being observed.
When we count the number of heads and tails in the observed sequence, we’re looking at the binomial distribution. We’ve made the implicit assumption that what we care about is the frequency of occurrence of both outcomes, and not the order in which they occur. We’ve made abstraction of the order, and we are effectively treating the sequences as unordered sets, so that HTHHTHHHHTTHHHHH and HHHHHTHTHHHTHTHH are basically the same thing. That is often desirable, but it’s worth being aware of such assumptions, and making them explicit.
If we do not ignore the order, and ask which sequence is the most likely, the answer is ‘all heads’. That may seem surprising at first, because seeing only heads is a relatively rare occurrence. But note that we’re asking a different question here, about the ordered sequences themselves, rather than about their statistics. While the difference is pretty clear here, the implicit assumptions and abstractions that we tend to use in our reasoning are often more subtle.
The table and figure below show how the probability of observing a given number of heads and tails can be found by multiplying the probability of a particular sequence with the number of such sequences. Note that ‘all heads’ has the highest probability out of all sequences (bolded), but there is only a single such sequence. The most likely number of heads we’ll observe is 12 (also bolded): even though each individual sequence with 12 heads is less likely, there are a lot more of them, and this second factor ends up dominating.
| #H | #T | p(sequence) | # sequences | p(#H, #T) |
|----|----|-------------|-------------|-----------|
| 0 | 16 | $$\left(\frac{3}{4}\right)^0 \left(\frac{1}{4}\right)^{16} = 2.33 \cdot 10^{-10}$$ | 1 | $$2.33\cdot 10^{-10}$$ |
| 1 | 15 | $$\left(\frac{3}{4}\right)^1 \left(\frac{1}{4}\right)^{15} = 6.98 \cdot 10^{-10}$$ | 16 | $$1.12\cdot 10^{-8}$$ |
| 2 | 14 | $$\left(\frac{3}{4}\right)^2 \left(\frac{1}{4}\right)^{14} = 2.10 \cdot 10^{-9}$$ | 120 | $$2.51\cdot 10^{-7}$$ |
| 3 | 13 | $$\left(\frac{3}{4}\right)^3 \left(\frac{1}{4}\right)^{13} = 6.29 \cdot 10^{-9}$$ | 560 | $$3.52\cdot 10^{-6}$$ |
| 4 | 12 | $$\left(\frac{3}{4}\right)^4 \left(\frac{1}{4}\right)^{12} = 1.89 \cdot 10^{-8}$$ | 1820 | $$3.43\cdot 10^{-5}$$ |
| 5 | 11 | $$\left(\frac{3}{4}\right)^5 \left(\frac{1}{4}\right)^{11} = 5.66 \cdot 10^{-8}$$ | 4368 | $$2.47\cdot 10^{-4}$$ |
| 6 | 10 | $$\left(\frac{3}{4}\right)^6 \left(\frac{1}{4}\right)^{10} = 1.70 \cdot 10^{-7}$$ | 8008 | $$1.36\cdot 10^{-3}$$ |
| 7 | 9 | $$\left(\frac{3}{4}\right)^7 \left(\frac{1}{4}\right)^9 = 5.09 \cdot 10^{-7}$$ | 11440 | $$5.83\cdot 10^{-3}$$ |
| 8 | 8 | $$\left(\frac{3}{4}\right)^8 \left(\frac{1}{4}\right)^8 = 1.53 \cdot 10^{-6}$$ | 12870 | $$1.97\cdot 10^{-2}$$ |
| 9 | 7 | $$\left(\frac{3}{4}\right)^9 \left(\frac{1}{4}\right)^7 = 4.58 \cdot 10^{-6}$$ | 11440 | $$5.24\cdot 10^{-2}$$ |
| 10 | 6 | $$\left(\frac{3}{4}\right)^{10} \left(\frac{1}{4}\right)^6 = 1.37 \cdot 10^{-5}$$ | 8008 | $$1.10\cdot 10^{-1}$$ |
| 11 | 5 | $$\left(\frac{3}{4}\right)^{11} \left(\frac{1}{4}\right)^5 = 4.12 \cdot 10^{-5}$$ | 4368 | $$1.80\cdot 10^{-1}$$ |
| 12 | 4 | $$\left(\frac{3}{4}\right)^{12} \left(\frac{1}{4}\right)^4 = 1.24 \cdot 10^{-4}$$ | 1820 | $$\mathbf{2.25\cdot 10^{-1}}$$ |
| 13 | 3 | $$\left(\frac{3}{4}\right)^{13} \left(\frac{1}{4}\right)^3 = 3.71 \cdot 10^{-4}$$ | 560 | $$2.08\cdot 10^{-1}$$ |
| 14 | 2 | $$\left(\frac{3}{4}\right)^{14} \left(\frac{1}{4}\right)^2 = 1.11 \cdot 10^{-3}$$ | 120 | $$1.34\cdot 10^{-1}$$ |
| 15 | 1 | $$\left(\frac{3}{4}\right)^{15} \left(\frac{1}{4}\right)^1 = 3.33 \cdot 10^{-3}$$ | 16 | $$5.35\cdot 10^{-2}$$ |
| 16 | 0 | $$\left(\frac{3}{4}\right)^{16} \left(\frac{1}{4}\right)^0 = \mathbf{1.00 \cdot 10^{-2}}$$ | 1 | $$1.00\cdot 10^{-2}$$ |
```python
import matplotlib.pyplot as plt
import numpy as np
import scipy.special

h = np.arange(16 + 1)
p_sequence = (3/4)**h * (1/4)**(16 - h)
num_sequences = scipy.special.comb(16, h)

plt.figure(figsize=(9, 3))
plt.plot(h, p_sequence, 'C0-s',
         label='probability of a single sequence with this number of heads')
plt.plot(h, p_sequence * num_sequences, 'C1-o',
         label='probability of observing this number of heads')
plt.yscale('log')
plt.xlabel('number of heads')
plt.ylabel('probability')
plt.legend()
plt.show()
```
### Gaussian soap bubbles
Another excellent blog post about the unintuitive behaviour of high-dimensional probability distributions is Ferenc Huszar’s ‘Gaussian Distributions are Soap Bubbles’. A one-dimensional Gaussian looks like a bell curve: a big bump around the mode, with a tail on either side. Clearly, the bulk of the total probability mass is clumped together around the mode. In higher-dimensional spaces, this shape changes completely: the bulk of the probability mass of a spherical Gaussian distribution with unit variance in $$K$$ dimensions is concentrated in a thin ‘shell’ at radius $$\sqrt{K}$$. This is known as the Gaussian annulus theorem.
For example, if we sample lots of vectors from a 100-dimensional standard Gaussian, and measure their radii, we will find that just over 84% of them are between 9 and 11, and more than 99% are between 8 and 12. Only about 0.2% have a radius smaller than 8!
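These percentages can be verified without sampling: the radius of a sample from a $$K$$-dimensional standard Gaussian follows a chi distribution with $$K$$ degrees of freedom. A quick check (my addition, not from the original post):

```python
import scipy.stats

# Radii of samples from a 100-dimensional standard Gaussian follow
# a chi distribution with 100 degrees of freedom.
r = scipy.stats.chi(df=100)

print(r.cdf(11) - r.cdf(9))  # ~0.84: mass at radius between 9 and 11
print(r.cdf(12) - r.cdf(8))  # ~0.995: mass at radius between 8 and 12
print(r.cdf(8))              # ~0.002: mass at radius below 8
```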
Ferenc points out an interesting implication: high-dimensional Gaussians are very similar to uniform distributions on the sphere. This clearly isn’t true for the one-dimensional case, but it turns out that’s an exception, not the rule. Stefan Stein also discusses this implication in more detail in a recent blog post.
Where our intuition can go wrong here, is that we might underestimate how quickly a high-dimensional space grows in size as we move further away from the mode. Because of the radial symmetry of the distribution, we tend to think of all points at a given distance from the mode as similar, and we implicitly group them into sets of concentric spheres. This allows us to revert back to reasoning in one dimension, which we are more comfortable with: we think of a high-dimensional Gaussian as a distribution over these sets, rather than over individual points. What we tend to overlook, is that those sets differ wildly in size: as we move away from the mode, they grow larger very quickly. Note that this does not happen at all in 1D!
## Abstraction and the curse of dimensionality
The curse of dimensionality is a catch-all term for various phenomena that appear very different and often counterintuitive in high-dimensional spaces. It is used to highlight poor scaling behaviour of ideas and algorithms, where one wouldn’t necessarily expect it. In the context of machine learning, it is usually used in a more narrow sense, to refer to the fact that models of high-dimensional data tend to require very large training datasets to be effective. But the curse of dimensionality manifests itself in many forms, and the unintuitive behaviour of high-dimensional probability distributions is just one of them.
In general, humans have lousy intuitions about high-dimensional spaces. But what exactly is going on when we get things wrong about high-dimensional distributions? In both of the motivating examples, the intuition breaks down in a similar way: if we’re not careful, we might implicitly reason about the probabilities of sets, rather than individual points, without taking into account their relative sizes, and arrive at the wrong answer. This means that we can encounter this issue for both discrete and continuous distributions.
We can generalise this idea of grouping points into sets of similar points, by thinking of it as ‘abstraction’: rather than treating each point as a separate entity, we think of it as an instance of a particular concept, and ignore its idiosyncrasies. When we think of ‘sand’, we are rarely concerned about the characteristics of each individual grain. Similarly, in the ‘unfair coin flips’ example, we group sequences by their number of heads and tails, ignoring their order. In the case of the high-dimensional Gaussian, the natural grouping of points is based on their Euclidean distance from the mode. A more high-level example is that of natural images, where individual pixel values across localised regions of the image combine to form edges, textures, or even objects. There are usually many combinations of pixel values that give rise to the same texture, and we aren’t able to visually distinguish these particular instances unless we carefully study them side by side.
The following is perhaps a bit of an unfounded generalisation based on my own experience, but our brains seem hardwired to perform this kind of abstraction, so that we can reason about things in the familiar low-dimensional setting. It seems to happen unconsciously and continuously, and bypassing it requires a proactive approach.
## Typicality
Informally, typicality refers to the characteristics that samples from a distribution tend to exhibit on average (in expectation). In the ‘unfair coin flip’ example, a sequence with 12 heads and 4 tails is ‘typical’. A sequence with 6 heads and 10 tails is highly atypical. Typical sequences contain an average amount of information: they are not particularly surprising or (un)informative.
We can formalise this intuition using the entropy of the distribution: a typical set $$\mathcal{T}_\varepsilon \subset \mathcal{X}$$ is a set of sequences from $$\mathcal{X}$$ whose probability is close to $$2^{-H}$$, where $$H$$ is the entropy of the distribution that the sequences were drawn from, measured in bits:
$$\mathcal{T}_\varepsilon = \{ \mathbf{x} \in \mathcal{X}: 2^{-(H + \varepsilon)} \leq p(\mathbf{x}) \leq 2^{-(H - \varepsilon)} \}.$$
This means that the negative log likelihood of each such sequence is close to the entropy. Note that a distribution doesn’t have just one typical set: we can define many typical sets based on how close the probability of the sequences contained therein should be to $$2^{-H}$$, by choosing different values of $$\varepsilon > 0$$.
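To make this concrete, here is a small sketch (my own addition, not from the original post) that computes the entropy of the 16-flip distribution from the unfair coin example and checks which sequences are typical:

```python
import numpy as np

p_heads = 3/4
n = 16

# Entropy of a single flip in bits, times 16 flips: ~12.98 bits.
h_flip = -(p_heads * np.log2(p_heads) + (1 - p_heads) * np.log2(1 - p_heads))
H = n * h_flip

def neg_log2_prob(num_heads):
    # Negative log-probability (in bits) of one particular sequence
    # containing the given number of heads.
    return -(num_heads * np.log2(p_heads)
             + (n - num_heads) * np.log2(1 - p_heads))

print(H)                  # ~12.98
print(neg_log2_prob(12))  # ~12.98: sequences with 12 heads are typical
print(neg_log2_prob(16))  # ~6.64: 'all heads' lies far outside the typical set
```

Sequences with 12 heads (the expected count) match the entropy exactly, while the single most likely sequence is wildly atypical.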
This concept was originally defined in an information-theoretic context, but I want to focus on machine learning, where I feel it is somewhat undervalued. It is often framed in terms of sequences sampled from stationary ergodic processes, but it is useful more generally for distributions of any kind of high-dimensional data points, both continuous and discrete, regardless of whether we tend to think of them as sequences.
Why is this relevant to our discussion of abstraction and flawed human intuitions? As the dimensionality increases, the probability that any random sample from a distribution is part of a given typical set $$\mathcal{T}_\varepsilon$$ tends towards 1. In other words, randomly drawn samples will almost always be ‘typical’, and the typical set covers most of the support of the distribution (this is a consequence of the so-called asymptotic equipartition property (AEP)). This happens even when $$\varepsilon$$ is relatively small, as long as the dimensionality is high enough. This is visualised for a 100-dimensional standard Gaussian distribution below (based on empirical measurements, to avoid having to calculate some gnarly 100D integrals).
```python
import matplotlib.pyplot as plt
import numpy as np

N = 1000000
K = 100
samples = np.random.normal(0, 1, (N, K))
radii = np.linalg.norm(samples, axis=1)

# Min. and max. radii inside the typical set, for epsilon measured in bits:
# |r^2/2 - K/2| <= epsilon * ln(2), i.e. K - eps*ln(4) <= r^2 <= K + eps*ln(4).
epsilon = np.logspace(-1, 2, 200)
lo = np.sqrt(np.maximum(K - epsilon * np.log(4), 0))
hi = np.sqrt(K + epsilon * np.log(4))
mass = [np.mean((lo[i] < radii) & (radii < hi[i])) for i in range(len(epsilon))]

plt.figure(figsize=(9, 3))
plt.plot(hi - lo, mass)
plt.xlabel('Difference between the min. and max. radii inside '
           '$\\mathcal{T}_\\varepsilon$ for given $\\varepsilon$')
plt.ylabel('Total probability mass in $\\mathcal{T}_\\varepsilon$')
plt.show()
```
But this is where it gets interesting: for unimodal high-dimensional distributions, such as the multivariate Gaussian, the mode (i.e. the most likely value) usually isn’t part of the typical set. More generally, individual samples from high-dimensional (and potentially multimodal) distributions that have an unusually high likelihood are not typical, so we wouldn’t expect to see them when sampling. This can seem paradoxical, because they are by definition very ‘likely’ samples — it’s just that there are so few of them! Think about how surprising it would be to randomly sample the zero vector (or something very close to it) from a 100-dimensional standard Gaussian distribution.
This has some important implications: if we want to learn more about what a high-dimensional distribution looks like, studying the most likely samples is usually a bad idea. If we want to obtain a good quality sample from a distribution, subject to constraints, we should not be trying to find the single most likely one. Yet in machine learning, these are things that we do on a regular basis. In the next section, I’ll discuss a few situations where this paradox comes up in practice. For a more mathematical treatment of typicality and the curse of dimensionality, check out this case study by Bob Carpenter.
## Typicality in the wild
A significant body of literature, spanning several subfields of machine learning, has sought to interpret and/or mitigate the unintuitive ways in which high-dimensional probability distributions behave. In this section, I want to highlight a few interesting papers and discuss them in relation to the concept of typicality. Note that I’ve made a selection based on what I’ve read recently, and this is not intended to be a comprehensive overview of the literature. In fact, I would appreciate pointers to other related work (papers and blog posts) that I should take a look at!
### Language modelling
In conditional language modelling tasks, such as machine translation or image captioning, it is common to use conditional autoregressive models in combination with heuristic decoding strategies such as beam search. The underlying idea is that we want to find the most likely sentence (i.e. the mode of the conditional distribution, ‘MAP decoding’), but since this is intractable, we’ll settle for an approximate result instead.
With typicality in mind, it’s clear that this isn’t necessarily the best idea. Indeed, researchers have found that machine translation results, measured using the BLEU metric, sometimes get worse when the beam width is increased2,3. A higher beam width gives a better, more computationally costly approximation to the mode, but not necessarily better translation results. In this case, it’s tempting to blame the metric itself, which obviously isn’t perfect, but this effect has also been observed with human ratings4, so that cannot be the whole story.
A recent paper by Eikema & Aziz5 provides an excellent review of recent work in this space, and makes a compelling argument for MAP decoding as the culprit behind many of the pathologies that neural machine translation systems exhibit (rather than their network architectures or training methodologies). They also propose an alternative decoding strategy called ‘minimum Bayes risk’ (MBR) decoding that takes into account the whole distribution, rather than only the mode.
In unconditional language modelling, beam search hasn’t caught on, but not for want of trying! Stochasticity of the result is often desirable in this setting, and the focus has been on sampling strategies instead. In ‘The Curious Case of Neural Text Degeneration’6, Holtzman et al. observe that maximising the probability leads to poor quality results that are often repetitive. Repetitive samples may not be typical, but they have high likelihoods simply because they are more predictable.
They compare a few different sampling strategies that interpolate between fully random sampling and greedy decoding (i.e. predicting the most likely token at every step in the sequence), including the nucleus sampling technique which they propose. The motivation for trying to find a middle ground is that models will assign low probabilities to sequences that they haven’t seen much during training, which makes low-probability predictions inherently less reliable. Therefore, we want to avoid sampling low-probability tokens to some extent.
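As an illustration, here is a minimal sketch of nucleus (top-p) sampling for a single step, assuming we are given the model’s token probabilities as a numpy array (my own paraphrase of the idea, not the authors’ reference implementation):

```python
import numpy as np

def nucleus_sample(probs, p=0.9, rng=np.random.default_rng()):
    # Keep the smallest set of tokens whose cumulative probability
    # reaches p (the 'nucleus'), renormalise, and sample from it.
    order = np.argsort(probs)[::-1]              # most likely tokens first
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, p) + 1  # size of the nucleus
    nucleus = order[:cutoff]
    nucleus_probs = probs[nucleus] / probs[nucleus].sum()
    return rng.choice(nucleus, p=nucleus_probs)

# Toy distribution over 5 tokens: the two least likely tokens fall
# outside the nucleus and can never be sampled.
probs = np.array([0.5, 0.25, 0.15, 0.07, 0.03])
print(nucleus_sample(probs, p=0.9))
```

Setting p = 1 recovers fully random sampling, while a very small p approaches greedy decoding.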
Zhang et al.4 frame the choice of a language model decoding strategy as a trade-off between diversity and quality. However, they find that reducing diversity only helps quality up to a point, and reducing it too much makes the results worse, as judged by human evaluators. They call this ‘the likelihood trap’: human-judged quality of samples correlates very well with likelihood, up to an inflection point, where the correlation becomes negative.
In the context of typicality, this raises an interesting question: where exactly is this inflection point, and how does it relate to the typical set of the model distribution? I think it would be very interesting to determine whether the inflection point coincides exactly with the typical set, or whether it is more/less likely. Perhaps there is some degree of atypicality that human raters will tolerate? If so, can we quantify it? This wouldn’t be far-fetched: think about our preference for celebrity faces over ‘typical’ human faces, for example!
### Image modelling
The previously mentioned ‘note on the evaluation of generative models’1 is a seminal piece of work that demonstrates several ways in which likelihoods in the image domain can be vastly misleading.
In ‘Do Deep Generative Models Know What They Don’t Know?’7, Nalisnick et al. study the behaviour of likelihood-based models when presented with out-of-domain data. They observe how models can assign higher likelihoods to datasets other than their training datasets. Crucially, they show this for different classes of likelihood-based models (variational autoencoders, autoregressive models and flow-based models, see Figure 3 in the paper), which clearly demonstrates that this is an issue with the likelihood-based paradigm itself, and not with a particular model architecture or formulation.
Comparing images from CIFAR-10 and SVHN, two of the datasets they use, a key difference is the prevalence of textures in CIFAR-10 images, and the relative absence of such textures in SVHN images. This makes SVHN images inherently easier to predict, which partially explains why models trained on CIFAR-10 tend to assign higher likelihoods to SVHN images. Despite this, we clearly wouldn’t ever be able to sample anything that looks like an SVHN image from a CIFAR-10-trained model, because such images are not in the typical set of the model distribution (even if their likelihood is higher).
### Audio modelling
I don’t believe I’ve seen any recent work that studies sampling and decoding strategies for likelihood-based models in the audio domain. Nevertheless, I wanted to briefly discuss this setting because a question I often get is: “why don’t you use greedy decoding or beam search to improve the quality of WaveNet samples?”
If you’ve read this far, the answer is probably clear to you by now: because audio samples outside of the typical set sound really weird! In fact, greedy decoding from a WaveNet will invariably yield complete silence, even for fairly strongly conditioned models (e.g. WaveNets for text-to-speech synthesis). In the text-to-speech case, even if you simply reduce the sampling temperature a bit too aggressively, certain consonants that are inherently noisy (such as ‘s’, ‘f’, ‘sh’ and ‘h’, the fricatives) will start sounding very muffled. These sounds are effectively different kinds of noise, and reducing the stochasticity of this noise has an audible effect.
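For reference (my own sketch, with hypothetical inputs, not code from any WaveNet implementation), ‘reducing the sampling temperature’ amounts to sharpening the output distribution before sampling:

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0,
                            rng=np.random.default_rng()):
    # Dividing logits by T < 1 sharpens the distribution (T -> 0 approaches
    # greedy decoding); T > 1 flattens it towards uniform sampling.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

logits = np.array([2.0, 1.0, 0.5, 0.1])
print(sample_with_temperature(logits, temperature=0.5))  # mostly token 0
```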
### Anomaly detection
Anomaly detection, or out-of-distribution (OOD) detection, is the task of identifying whether a particular input could have been drawn from a given distribution. Generative models are often used for this purpose: train an explicit model on in-distribution data, and then use its likelihood estimates to identify OOD inputs.
Usually, the assumption is made that OOD inputs will have low likelihoods, and in-distribution inputs will have high likelihoods. However, the fact that the mode of a high-dimensional distribution usually isn’t part of its typical set clearly contradicts this. This mistaken assumption is quite pervasive. Only recently has it started to be challenged explicitly, e.g. in works by Nalisnick et al.8 and Morningstar et al.9. Both of these works propose testing the typicality of inputs, rather than simply measuring and thresholding their likelihood.
## The right level of abstraction
While our intuitive notion of likelihood in high-dimensional spaces might technically be wrong, it can often be a better representation of what we actually care about. This raises the question: should we really be fitting our generative models using likelihood measured in the input space? If we were to train likelihood-based models with ‘intuitive’ likelihood, they might perform better according to perceptual metrics, because they do not have to waste capacity capturing all the idiosyncrasies of particular examples that we don’t care to distinguish anyway.
In fact, measuring likelihood in more abstract representation spaces has had some success in generative modelling, and I think the approach should be taken more seriously in general. In language modelling, it is common to measure likelihoods at the level of word pieces, rather than individual characters. In symbolic music modelling, recent models that operate on event-based sequences (rather than sequences with a fixed time quantum) are more effective at capturing large-scale structure10. Some likelihood-based generative models of images separate or discard the least-significant bits of each pixel colour value, because they are less perceptually relevant, allowing model capacity to be used more efficiently11,12.
But perhaps the most striking example is the recent line of work where VQ-VAE13 is used to learn discrete higher-level representations of perceptual signals, and generative models are then trained to maximise the likelihood in this representation space. This approach has led to models that produce images that are on par with those produced by GANs in terms of fidelity, and exceed them in terms of diversity14,15,16. It has also led to models that are able to capture long-range temporal structure in audio signals, which even GANs had not been able to do before17,18. While the current trend in representation learning is to focus on coarse-grained representations which are suitable for discriminative downstream tasks, I think it also has a very important role to play in generative modelling.
In the context of modelling sets with likelihood-based models, a recent blog post by Adam Kosiorek drew my attention to point processes, and in particular, to the formula that expresses the density over ordered sequences in terms of the density over unordered sets. This formula quantifies how we need to scale probabilities across sets of different sizes to make them comparable. I think it may yet prove useful to quantify the unintuitive behaviours of likelihood-based models.
## Closing thoughts
To wrap up this post, here are some takeaways:
• High-dimensional spaces, and high-dimensional probability distributions in particular, are deeply unintuitive in more ways than one. This is a well-known fact, but they still manage to surprise us sometimes!
• The most likely samples from a high-dimensional distribution usually aren’t a very good representation of that distribution. In most situations, we probably shouldn’t be trying to find them.
• Typicality is a very useful concept to describe these unintuitive phenomena, and I think it is undervalued in machine learning — at least in the work that I’ve been exposed to.
• A lot of work that discusses these issues (including some that I’ve highlighted in this post) doesn’t actually refer to typicality by name. I think doing so would improve our collective understanding, and shed light on links between related phenomena in different subfields.
In an addendum to this post, I explore quantitatively what happens when our intuitions fail us in high-dimensional spaces.
If you would like to cite this post in an academic context, you can use this BibTeX snippet:
```bibtex
@misc{dieleman2020typicality,
  author = {Dieleman, Sander},
  title = {Musings on typicality},
  url = {https://benanne.github.io/2020/09/01/typicality.html},
  year = {2020}
}
```
## Acknowledgements
Thanks to Katie Millican, Jeffrey De Fauw and Adam Kosiorek for their valuable input and feedback on this post!
## References
1. Theis, van den Oord and Bethge, “A note on the evaluation of generative models”, International Conference on Learning Representations, 2016.
2. Koehn & Knowles, “Six Challenges for Neural Machine Translation”, First Workshop on Neural Machine Translation, 2017.
3. Ott, Auli, Grangier and Ranzato, “Analyzing Uncertainty in Neural Machine Translation”, International Conference on Machine Learning, 2018.
4. Zhang, Duckworth, Ippolito and Neelakantan, “Trading Off Diversity and Quality in Natural Language Generation”, arXiv, 2020.
5. Eikema and Aziz, “Is MAP Decoding All You Need? The Inadequacy of the Mode in Neural Machine Translation”, arXiv, 2020.
6. Holtzman, Buys, Du, Forbes and Choi, “The Curious Case of Neural Text Degeneration”, International Conference on Learning Representations, 2020.
7. Nalisnick, Matsukawa, Teh, Gorur and Lakshminarayanan, “Do Deep Generative Models Know What They Don’t Know?”, International Conference on Learning Representations, 2019.
8. Nalisnick, Matsukawa, Teh and Lakshminarayanan, “Detecting Out-of-Distribution Inputs to Deep Generative Models Using Typicality”, arXiv, 2019.
9. Morningstar, Ham, Gallagher, Lakshminarayanan, Alemi and Dillon, “Density of States Estimation for Out-of-Distribution Detection”, arXiv, 2020.
10. Oore, Simon, Dieleman, Eck and Simonyan, “This Time with Feeling: Learning Expressive Musical Performance”, Neural Computing and Applications, 2020.
11. Menick and Kalchbrenner, “Generating High Fidelity Images with Subscale Pixel Networks and Multidimensional Upscaling”, International Conference on Machine Learning, 2019.
12. Kingma & Dhariwal, “Glow: Generative flow with invertible 1x1 convolutions”, Neural Information Processing Systems, 2018.
13. van den Oord, Vinyals and Kavukcuoglu, “Neural Discrete Representation Learning”, Neural Information Processing Systems, 2017.
14. Razavi, van den Oord and Vinyals, “Generating Diverse High-Fidelity Images with VQ-VAE-2”, Neural Information Processing Systems, 2019.
15. De Fauw, Dieleman and Simonyan, “Hierarchical Autoregressive Image Models with Auxiliary Decoders”, arXiv, 2019.
16. Ravuri and Vinyals, “Classification Accuracy Score for Conditional Generative Models”, Neural Information Processing Systems, 2019.
17. Dieleman, van den Oord and Simonyan, “The challenge of realistic music generation: modelling raw audio at scale”, Neural Information Processing Systems, 2018.
18. Dhariwal, Jun, Payne, Kim, Radford and Sutskever, “Jukebox: A Generative Model for Music”, arXiv, 2020.
https://math.stackexchange.com/questions/331539/combining-rotation-quaternions

# Combining rotation quaternions
Suppose I combine two rotation quaternions by multiplying them: say one represents some rotation around the x-axis and the other represents some rotation around some arbitrary axis.
The order of rotation matters, so the order of the quaternion multiplication to "combine" the rotation matters also.
My question is, how does the combining of quaternion rotations work? Is it like matrix transformations, where
$$(M_2 M_1) p = M_2 (M_1 p) \, ?$$
The point $p$ will be transformed by $M_1$, and then by $M_2$, even though technically it's just being multiplied by $M_2 M_1$. Do rotation quaternions work the same way? Does the earliest rotation have to be on the right side, and then subsequent rotations are applied by multiplying on the left?
If you check some of the resources in the earlier question you'll find that the most useful way quaternions act as rotations is by conjugation.
Think of the $i,j,k$ vectors as orthonormal vectors in 3-dimensional space, as we usually do in physics. Every point in 3-space then is just a linear combination of these three vectors. These are the "pure quaternions" whose real parts are 0.
Given a quaternion with norm 1, call it $u$, you can rotate a pure quaternion $v$ by conjugating: $v\mapsto uvu^{-1}$. Let $w$ be another quaternion with norm 1. Then as you observed, you can rotate by $u$ and $w$ in two different orders:
$$wuvu^{-1}w^{-1}=(wu)v(wu)^{-1}$$
or
$$uwvw^{-1}u^{-1}=(uw)v(uw)^{-1}$$
which potentially can be different.
Let's try it with a few very simple choices of $u$ and $w$. Try $u=i$ and $w=j$, and see what happens to the $i,j,k$ vectors under those rotations. If we try this with $u=i$, you can check that $$i\mapsto iii^{-1}=i, \qquad j\mapsto iji^{-1}=-j, \qquad k\mapsto iki^{-1}=-k.$$
Visualize what has happened to the original triad $i,j,k$ after rotation. I'll leave the other example to you.
To customize length 1 quaternions that rotate things the way you want to, you'll have to take a look at the wiki article. Basically the idea is this: every rotation in 3-space is specified by an axis of rotation and the angle you rotate about that axis. To find your customized $u$, you first compute a unit quaternion $h$ which is normal to the plane of rotation, and then an expression like $u=\cos(\theta/2)+h\sin(\theta/2)$ turns out to be what you want. (I haven't been careful about specifying the direction and rotation or signs in this sketch, so take care when following the detailed explanation.)
• OK, so I think this is a good answer. I'm really learning something by breaking this down. I have only this to say. When strung together, the different quaternions w, u and v basically melt together. I can hardly tell which is which when they are displayed in that math font. :/ – John Leidegren Aug 16 '16 at 9:02
To rotate a vector $v = ix + jy + kz$ by a quaternion $q$ you compute $v^q = q v q^{-1}$.
So if $q$ and $q'$ are two rotation quaternions, to rotate by $q$ then $q'$ you calculate $(v^q)^{q'} = q' q \,v\, q^{-1} q'^{-1} = q' q \,v\, (q' q)^{-1} = v^{q'q}.$
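A minimal numpy sketch of the above (my own illustration, not from the original answers), representing a quaternion as `[w, x, y, z]` and checking that conjugating by $q'q$ applies $q$ first and then $q'$:

```python
import numpy as np

def qmul(a, b):
    # Hamilton product of quaternions a = [w, x, y, z] and b.
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qconj(a):
    # Conjugate; equal to the inverse for unit quaternions.
    return np.array([a[0], -a[1], -a[2], -a[3]])

def rotation_quaternion(axis, theta):
    # u = cos(theta/2) + sin(theta/2) * (unit axis), as described above.
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    return np.concatenate([[np.cos(theta / 2)], np.sin(theta / 2) * axis])

def rotate(q, v):
    # v -> q v q^{-1}, embedding v as the pure quaternion [0, x, y, z].
    return qmul(qmul(q, np.concatenate([[0.0], v])), qconj(q))[1:]

q1 = rotation_quaternion([1, 0, 0], np.pi / 2)  # 90 degrees about x
q2 = rotation_quaternion([0, 0, 1], np.pi / 2)  # 90 degrees about z
v = np.array([0.0, 1.0, 0.0])

print(rotate(q2, rotate(q1, v)))  # ~[0, 0, 1]: rotate by q1, then by q2
print(rotate(qmul(q2, q1), v))    # ~[0, 0, 1]: same result via the product q2*q1
print(rotate(qmul(q1, q2), v))    # ~[-1, 0, 0]: the opposite order differs
```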
### References
Quaternions and spatial rotation
https://math.stackexchange.com/questions/2525244/sock-picking-without-replacement-probability

# Sock picking without replacement (Probability)
Question:
The chance of picking a red sock out of a drawer of infinitely many socks is $\frac{1}{3}$, and the chance of picking a blue sock is $\frac{2}{3}$.
What's the chance that if I pick $20$ socks out of these, $19$ are blue?
Attempt:
I tried finding the probability of $P(\text{Blue} = 19 \text{ & Red} = 1)$ and multiplying it by the number of ways this could happen.
So,
$$P(\text{Blue} = 19 \text{ & Red} = 1) = \left(\frac{2}{3}\right)^{19} \cdot \left(\frac{1}{3}\right)^1 \approx 0.0001504$$ Permutations: $\frac{20!}{19!} = 20$
Solution $= 20 \cdot 0.0001504 \approx 0.003.$
I know this is wrong because I tried the above procedure with $P(\text{Blue} = 6 \text{ & Red} = 3)$, which intuitively should work out to 1, but did not get the result.
What am I missing here?
• Your calculation seems right to me. Can you explain why P(blue = 6 and red = 3) should be 1? I don't see why that should happen, unless I am missing something. – Abhiram Natarajan Nov 17 '17 at 20:34
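A quick numerical check (my addition, not part of the original thread) confirms that the poster's method gives the binomial probability, and that these probabilities sum to 1 only across all possible counts, not for any single count:

```python
from math import comb

def p_blue(k, n, p=2/3):
    # Probability of exactly k blue socks in n independent picks.
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(p_blue(19, 20))                         # ~0.0030: 19 blue out of 20
print(sum(p_blue(k, 20) for k in range(21)))  # 1.0 across all possible counts
print(p_blue(6, 9))                           # ~0.27: 6 blue & 3 red is one outcome of many
```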
https://www.nature.com/articles/s41467-018-05649-9/
# Ancient DNA from Chalcolithic Israel reveals the role of population mixture in cultural transformation
## Abstract
The material culture of the Late Chalcolithic period in the southern Levant (4500–3900/3800 BCE) is qualitatively distinct from previous and subsequent periods. Here, to test the hypothesis that the advent and decline of this culture was influenced by movements of people, we generated genome-wide ancient DNA from 22 individuals from Peqi’in Cave, Israel. These individuals were part of a homogeneous population that can be modeled as deriving ~57% of its ancestry from groups related to those of the local Levant Neolithic, ~17% from groups related to those of the Iran Chalcolithic, and ~26% from groups related to those of the Anatolian Neolithic. The Peqi’in population also appears to have contributed differently to later Bronze Age groups, one of which we show cannot plausibly have descended from the same population as that of Peqi’in Cave. These results provide an example of how population movements propelled cultural changes in the deep past.
## Introduction
The material culture of the Late Chalcolithic period in the southern Levant contrasts qualitatively with that of earlier and later periods in the same region. The Late Chalcolithic in the Levant is characterized by increases in the density of settlements, introduction of sanctuaries1,2,3, utilization of ossuaries in secondary burials4,5, and expansion of public ritual practices as well as an efflorescence of symbolic motifs sculpted and painted on artifacts made of pottery, basalt, copper, and ivory6,7,8,9. The period’s impressive metal artifacts, which reflect the first known use of the “lost wax” technique for casting of copper, attest to the extraordinary technical skill of the people of this period10,11.
The distinctive cultural characteristics of the Late Chalcolithic period in the Levant (often related to the Ghassulian culture, although this term is not in practice applied to the Galilee region where this study is based) have few stylistic links to the earlier or later material cultures of the region, which has led to extensive debate about the origins of the people who made this material culture. One hypothesis is that the Chalcolithic culture in the region was spread in part by immigrants from the north (i.e., northern Mesopotamia), based on similarities in artistic designs12,13. Others have suggested that the local populations of the Levant were entirely responsible for developing this culture, and that any similarities to material cultures to the north are due to borrowing of ideas and not to movements of people2,14,15,16,17,18,19.
To explore these questions, we studied ancient DNA from a Chalcolithic site in Northern Israel, Peqi’in (Fig. 1a). This cave, which is around 17 m long and 4.5–8.0 m wide (Fig. 1b), was discovered during road construction in 1995, and was sealed by natural processes during or around the end of the Late Chalcolithic period (around 3900 BCE). Archeological excavations have revealed an extraordinary array of finely crafted objects, including chalices, bowls, and churns, as well as more than 200 ossuaries and domestic jars repurposed as ossuaries (the largest number ever found in a single cave), often decorated with anthropomorphic designs (Fig. 1c)20,21. It has been estimated that the burial cave contained up to 600 individuals22, making it the largest burial site ever identified from the Late Chalcolithic period in the Levant. Direct radiocarbon dating suggests that the cave was in use throughout the Late Chalcolithic (4500–3900 BCE), functioning as a central burial location for the region21,23.
Previous genome-wide ancient DNA studies from the Near East have revealed that at the time when agriculture developed, populations from Anatolia, Iran, and the Levant were approximately as genetically differentiated from each other as present-day Europeans and East Asians are today24,25. By the Bronze Age, however, expansion of different Near Eastern agriculturalist populations—Anatolian, Iranian, and Levantine—in all directions and admixture with each other substantially homogenized populations across the region, thereby contributing to the relatively low genetic differentiation that prevails today24. Lazaridis et al.24 showed that the Levant Bronze Age population from the site of 'Ain Ghazal, Jordan (2490–2300 BCE) could be fit statistically as a mixture of around 56% ancestry from a group related to Levantine Pre-Pottery Neolithic agriculturalists (represented by ancient DNA from Motza, Israel and 'Ain Ghazal, Jordan; 8300–6700 BCE) and 44% related to populations of the Iranian Chalcolithic (Seh Gabi, Iran; 4680–3662 calBCE). Haber et al.26 suggested that the Canaanite Levant Bronze Age population from the site of Sidon, Lebanon (~1700 BCE) could be modeled as a mixture of the same two groups albeit in different proportions (48% Levant Neolithic-related and 52% Iran Chalcolithic-related). However, the Neolithic and Bronze Age sites analyzed so far in the Levant are separated in time by more than three thousand years, making the study of samples that fill in this gap, such as those from Peqi’in, of critical importance.
In a dedicated clean room facility at Harvard Medical School, we obtained bone powder from 48 skeletal remains, of which 37 were petrous bones known for excellent DNA preservation27. We extracted DNA28 and built next-generation sequencing libraries to which we attached unique barcodes to minimize the possibility of contamination. We treated the libraries with Uracil–DNA glycosylase (UDG) to reduce characteristic ancient DNA damage at all but the first and last nucleotides29 (Supplementary Table 1 and Supplementary Data 1 provide background for successful samples and report information for each library, respectively). After initial screening by enriching the libraries for mitochondrial DNA, we enriched promising libraries for sequences overlapping about 1.2 million single nucleotide polymorphisms (SNPs)30,31. We evaluated each individual for evidence of authentic ancient DNA by limiting to libraries with a minimum of 3% cytosine-to-thymine errors at the final nucleotide29, by requiring that the ratio of X-to-Y-chromosome sequences was characteristic of either a male or a female, by requiring >95% matching to the consensus sequence of mitochondrial DNA30, and by requiring (for males) a lack of variation at known polymorphic positions on chromosome X (point estimates of contamination of less than 2%)32. We also restricted to individuals with at least 5000 of the targeted SNPs covered at least once.
This procedure produced genome-wide data from 22 ancient individuals from Peqi’in Cave (4500–3900 calBCE), with the individuals having a median of 358,313 of the targeted SNPs covered at least once (range: 25,171–1,002,682). The dataset is of exceptional quality given the typically poor preservation of DNA in the warm Near East, with a higher proportion of samples yielding appreciable coverage of ancient DNA than has previously been obtained from the region, likely reflecting the optimal sampling techniques we used and the favorable preservation conditions at the cave. We analyzed this dataset in conjunction with previously published datasets of ancient Near Eastern populations24,26 to shed light on the history of the individuals buried in the Peqi’in cave site, and on the population dynamics of the Levant during the Late Chalcolithic period.
## Results
### Genetic differentiation and diversity in the ancient Levant
A total of 20 Peqi’in samples appear to be unrelated to each other to the limits of our resolution (that is, genetic analysis suggested that they were not first, second, or third degree relatives of each other), and we used these as our analysis set. Taking advantage of the new data point added by the Peqi’in samples, we began by studying how genetic differentiation among Levantine populations changed over time. We replicate previous reports of dramatic decline in genetic differentiation over time in West Eurasia24, observing a median pairwise FST of 0.023 (range: 0.009–0.061) between the Peqi’in samples (abbreviation: Levant_ChL) and other West Eurasian Neolithic and Chalcolithic populations, relative to a previously reported median pairwise FST of 0.098 (range: 0.023–0.153) observed between populations in pre-Neolithic periods, 0.015 (range: 0.002–0.045) in the Bronze Age periods, and 0.011 (range: 0–0.046) in present-day West Eurasian populations24. Thus, the collapse to present-day levels of differentiation was largely complete by the Chalcolithic (Supplementary Figure 1).
We also observe an increase in genetic diversity over time in the Levant as measured by the rate of polymorphism between two random genome sequences at each SNP analyzed in our study. Specifically, the Levant_ChL population exhibits an intermediate level of heterozygosity relative to the earlier and later populations (Fig. 2).
Both the increasing genetic diversity over time, and the reduced differentiation between populations as measured via FST, are consistent with a model in which gene flow reduced differentiation across groups while increasing diversity within groups.
### Genetic affinities of the individuals of Peqi’in Cave
To obtain a qualitative picture of how these individuals relate to previously published ancient DNA and to present-day people, we began by carrying out principal component analysis (PCA)33. In a plot of the first and second principal components (Fig. 3a), the samples from Peqi’in Cave form a tight cluster, supporting the grouping of these individuals into a single analysis population (while we use the broad name “Levant_ChL” to refer to these samples, we recognize that they are currently the only ancient DNA available from the Levant in this time period and future work will plausibly reveal genetic substructure in Chalcolithic samples over the broad region). The Levant_ChL cluster overlaps in the PCA with a cluster containing Neolithic Levantine samples (Levant_N), although it is slightly shifted upward on the plot toward a cluster corresponding to samples from the Levant Bronze Age, including samples from 'Ain Ghazal, Jordan (Levant_BA_South) and Sidon, Lebanon (Levant_BA_North). The placement of the Levant_ChL cluster is consistent with a previously observed pattern whereby chronologically later Levantine populations are shifted towards the Iran Chalcolithic (Iran_ChL) population compared to earlier Levantine populations, Levant_N (Pre-Pottery and Pottery Neolithic agriculturalists from present-day Israel and Jordan) and Natufians (Epipaleolithic hunter-gatherers from present-day Israel)24.
ADMIXTURE model-based clustering analyses34 produced results consistent with PCA in suggesting that individuals from the Levant_ChL population had a greater affinity on average to Iranian agriculturalist-related populations than was the case for earlier Levantine individuals. Figure 3b shows the ADMIXTURE results for the ancient individuals assuming K = 11 clusters (we selected this number because it maximizes ancestry components that are correlated to ancient populations from the Levant, from Iran, and European hunter-gatherers)24. Like all Levantine populations, the primary ancestry component assigned to the Levant_ChL population, shown in blue, is maximized in earlier Levant_N and Natufian individuals. ADMIXTURE also assigns a component of ancestry in Levant_ChL, shown in green, to a population that is generally absent in the earlier Levant_N and Natufian populations, but is present in later Levant_BA_South and Levant_BA_North samples. This green component is also inferred in small proportions in several samples assigned to the Levant_N, but there is not a clear association to archaeological location or date, and these individuals are not significantly genetically distinct from the other individuals included in Levant_N by formal testing, and thus we pool all Levant_N for the primary analyses in this study (Supplementary Note 1)24.
### Population continuity and admixture in the Levant
To determine the relationship of the Levant_ChL population to other ancient Near Eastern populations, we used f-statistics35 (see Supplementary Note 2 for more details). We first evaluated whether the Levant_ChL population is consistent with descending directly from a population related to the earlier Levant_N. If this was the case, we would expect that the Levant_N population would be consistent with being more closely related to the Levant_ChL population than it is to any other population, and indeed we confirm this by observing positive statistics of the form f4 (Levant_ChL, A; Levant_N, Chimpanzee) for all ancient test populations, A (Fig. 4a). However, Levant_ChL and Levant_N population do not form a clade, as when we compute symmetry statistics of the form f4 (Levant_N, Levant_ChL; A, Chimpanzee), we find that the statistic is often negative, with Near Eastern populations outside the Levant sharing more alleles with Levant_ChL than with Levant_N (Fig. 4b). We conclude that while the Levant_N and Levant_ChL populations are clearly related, the Levant_ChL population cannot be modeled as descending directly from the Levant_N population without additional admixture related to ancient Iranian agriculturalists. Direct evidence that Levant_ChL is admixed comes from the statistic f3 (Levant_ChL; Levant_N, A), which for some populations, A, is significantly negative indicating that allele frequencies in Levant_ChL tend to be intermediate between those in Levant_N and A—a pattern that can only arise if Levant_ChL is the product of admixture between groups related, perhaps distantly, to Levant_N and A35. The most negative f3- and f4-statistics are produced when A is a population from Iran or the Caucasus. This suggests that the Levant_ChL population is descended from a population related to Levant_N, but also harbors ancestry from non-Levantine populations related to those of Iran or the Caucasus that Levant_N does not share (or at least share to the same extent).
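For readers unfamiliar with these statistics: in their simplest form, f4(A, B; C, D) is the average over SNPs of (pA − pB)(pC − pD), and f3(X; A, B) is the average of (pX − pA)(pX − pB), where p denotes a population allele frequency. A schematic sketch follows (an illustration only; it omits the finite-sample corrections and block-jackknife standard errors used in practice, and is not the software used in this study):

```python
import numpy as np

def f4(pA, pB, pC, pD):
    # f4(A, B; C, D): near zero if (A, B) form a clade relative to (C, D).
    return np.mean((pA - pB) * (pC - pD))

def f3(pX, pA, pB):
    # f3(X; A, B): significantly negative values indicate that X is admixed
    # between populations related to A and B.
    return np.mean((pX - pA) * (pX - pB))

# Toy example: X is a 50/50 mixture of A and B, so its allele frequencies
# are intermediate and f3 comes out negative.
rng = np.random.default_rng(0)
pA = rng.uniform(0, 1, 10000)
pB = rng.uniform(0, 1, 10000)
pX = 0.5 * pA + 0.5 * pB
print(f3(pX, pA, pB))  # negative
```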
### The ancestry of the Levant Chalcolithic people
We used qpAdm as our main tool for identifying plausible admixture models for the ancient populations for which we have data (see Supplementary Note 3 for more details)36.
The qpAdm method evaluates whether a tested set of N “Left” populations—including a “target” population (the population whose ancestry is being modeled) and a set of N − 1 additional populations—is consistent with being derived from mixtures in various proportions of N − 1 ancestral populations related differentially to a set of outgroup populations, referred to as “Right” populations. For all our analyses, we use a base set of 11 “Right” outgroups referred to collectively as “09NW”—Ust_Ishim, Kostenki14, MA1, Han, Papuan, Onge, Chukchi, Karitiana, Mbuti, Natufian, and WHG—whose value for disentangling divergent strains of ancestry present in ancient Near Easterners has been documented in Lazaridis et al.24 (for some analyses we supplement this set with additional outgroups). To evaluate whether the “Left” populations are consistent with a hypothesis of being derived from N − 1 sources, qpAdm effectively computes all possible statistics of the form f4(Left_i, Left_j; Right_k, Right_l), for all possible pairs of populations in the proposed “Left” and “Right” sets. It then determines whether all the statistics can be written as a linear combination of f4-statistics corresponding to the differentiation patterns between the proposed N − 1 ancestral populations, appropriately accounting for the covariance of these statistics and computing a single p value for fit based on a Hotelling T-squared distribution36. For models that are consistent with the data (p > 0.05), qpAdm estimates proportions of admixture for the target population from sources related to the N − 1 ancestral populations (with standard errors). Crucially, qpAdm does not require specifying an explicit model for how the “Right” outgroup populations are related.
We first examined all possible “Left” population sets that consisted of Levant_ChL along with one other ancient population from the analysis dataset. Testing a wide range of ancient populations, we found that p values for all possible Left populations were below 0.05 (Supplementary Data 2), showing that Levant_ChL is not consistent with being a clade with any of them relative to the “Right” 09NW outgroups. We then considered models with “Left” population sets containing Levant_ChL along with two additional ancient populations, which corresponds to modeling the Levant_ChL as the result of a two-way admixture between populations related to these two other ancient populations. To reduce the number of hypotheses tested, we restricted the models to pairs of source populations that contain at least one of the six populations that we consider to be the most likely admixture sources based on geographical and temporal proximity: Anatolia_N, Anatolia_ChL, Armenia_ChL, Iran_ChL, Iran_N, and Levant_N. Again, we find no plausible two-way admixture models using a p > 0.05 threshold (Supplementary Figure 2 and Supplementary Data 3). Finally, we tested possible three-way admixture events, restricting to triplets that contain at least two of the six most likely admixture sources. Plausible solutions at p > 0.05 are listed in Table 1 (full results are reported in Supplementary Figure 3 and Supplementary Data 4).
We found multiple candidates for three-way admixture models, always including (1) Levant_N (2) either Anatolia_N or Europe_EN and (3) either Iran_ChL, Iran_N, Iran_LN, Iran_HotuIIIb or Levant_BA_North. These are all very similar models, as Europe_EN (early European agriculturalists) are known to be genetically primarily derived from Anatolian agriculturalists (Anatolia_N)31, and Levant_BA_North has ancestry related to Levant_N and Iran_ChL26. To distinguish between models involving Anatolian Neolithic (Anatolia_N) and European Early Neolithic (Europe_EN), we repeated the analysis including additional outgroup populations in the “Right” set that are sensitive to the European hunter-gatherer-related admixture present to a greater extent in Europe_EN than in Anatolia_N (Supplementary Figure 4a)31 (thus, we added Switzerland_HG, SHG, EHG, Iberia_BA, Steppe_Eneolithic, Europe_MNChL, Europe_LNBA to the “Right” outgroups; abbreviations in Supplementary Table 2). We found that only models involving Levant_N, Anatolia_N, and either Iran_ChL or Levant_BA_North passed at p > 0.05 (Table 1). To distinguish between Iran_ChL and Levant_BA_North, we added Iran_N to the outgroup set (for a total of 19 = 11 + 8 outgroups) (Supplementary Figure 4b). Only the model involving Iran_ChL remained plausible. Based on this uniquely fitting qpAdm model we infer the ancestry of Levant_ChL to be the result of a three-way admixture of populations related to Levant_N (57%), Iran_ChL (17%), and Anatolia_N (26%).
### The ancestry of late Levantine Bronze Age populations
It was striking to us that previously published Bronze Age Levantine samples from the sites of 'Ain Ghazal in present-day Jordan (Levant_BA_South) and Sidon in present-day Lebanon (Levant_BA_North) can be modeled as two-way admixtures, without the Anatolia_N contribution that is required to model the Levant_ChL population24,26. This suggests that the Levant_ChL population may not be directly ancestral to these later Bronze Age Levantine populations, because if it were, we would also expect to detect an Anatolia_N component of ancestry. In what follows, we treat Levant_BA_South and Levant_BA_North as separate populations for analysis, since the symmetry statistic f4(Levant_BA_North, Levant_BA_South; A, Chimp) is significant for a number of test populations A (|Z| ≥ 3) (Supplementary Data 5), consistent with the different estimated proportions of Levant_N and Iran_ChL ancestry reported in refs. 24,26.
To test the hypothesis that Levant_ChL may be directly ancestral to the Bronze Age Levantine populations, we attempted to model both Levant_BA_South and Levant_BA_North as two-way admixtures between Levant_ChL and every other ancient population in our dataset, using the base 09NW set of populations as the “Right” outgroups. We also compared these models to the previously published models that used the Levant_N and Iran_ChL populations as sources (Table 2; Supplementary Figure 5; Supplementary Data 6). In the case of Levant_BA_South from 'Ain Ghazal, Jordan, multiple models were plausible, and thus we returned to the strategy of adding additional “Right” population outgroups that are differentially related to one or more of the “Left” populations (specifically, we added various combinations of Armenia_EBA, Steppe_EMBA, Switzerland_HG, Iran_LN, and Iran_N). Only the model including Levant_N and Iran_ChL remains plausible under all conditions. Thus, we can conclude that groups related to Levant_ChL contributed little ancestry to Levant_BA_South.
We observe a qualitatively different pattern in the Levant_BA_North samples from Sidon, Lebanon, where models including Levant_ChL paired with either Iran_N, Iran_LN, or Iran_HotuIIIb populations appear to be a significantly better fit than those including Levant_N + Iran_ChL. We largely confirm this result using the “Right” population outgroups defined in Haber et al.26 (abbreviated Haber: Ust_Ishim, Kostenki14, MA1, Han, Papuan, Ami, Chukchi, Karitiana, Mbuti, Switzerland_HG, EHG, WHG, and CHG), although we find that the specific model involving Iran_HotuIIIb no longer works with this “Right” set of populations. Investigating this further, we find that the addition of Anatolia_N in the “Right” outgroup set excludes the model of Levant_N + Iran_ChL favored by Haber et al.26. These results imply that a population that harbored ancestry more closely related to Levant_ChL than to Levant_N contributed to the Levant_BA_North population, even if it did not contribute detectably to the Levant_BA_South population.
We obtained additional insight by running qpAdm with Levant_BA_South as a target of two-way admixture between Levant_N and Iran_ChL, but now adding Levant_ChL and Anatolia_N to the basic 09NW “Right” set of 11 outgroups. The addition of the Levant_ChL causes the model to fail, indicating that Levant_BA_South and Levant_ChL share ancestry following the separation of both of them from the ancestors of Levant_N and Iran_ChL. Thus, in the past there existed an unsampled population that contributed both to Levant_ChL and to Levant_BA_South, even though Levant_ChL cannot be the direct ancestor of Levant_BA_South because, as described above, it harbors Anatolia_N-related ancestry not present in Levant_BA_South.
### Genetic heterogeneity in the Levantine Bronze Age
We were concerned that our finding that the Levant_ChL population was a mixture of at least three groups might be an artifact of not having access to samples closely related to the true ancestral populations. One specific possibility we considered is that a single ancestral population admixed into the Levant to contribute to both the Levant_ChL and the Levant_BA_South populations, and that this was an unsampled population on an admixture cline between Anatolia_N and Iran_ChL, explaining why qpAdm requires three source populations to model it. To formally test this hypothesis, we used qpWave36,37,38, which determines the minimum number of source populations required to model the relationship between “Left” populations relative to “Right” outgroup populations. Unlike qpAdm, qpWave does not require that populations closely related to the true source populations are available for analysis. Instead it treats all “Left” populations equally, and attempts to determine the minimum number of theoretical source populations required to model the “Left” population set, relative to the “Right” population outgroups. Therefore, we model the relationship between Levant_N, Levant_ChL, and Levant_BA_South as “Left” populations, relative to the 09NW “Right” outgroup populations (Table 3). We find that a minimum of three source populations continues to be required to model the ancestry of these Levantine populations, supporting a model in which at least three separate sources of ancestry are present in the Levant between the Neolithic, Chalcolithic, and Bronze Age.
We applied qpWave again, replacing Levant_ChL with Levant_BA_North, and found that the minimum number of source populations is only two. However, when we include the Levant_ChL population as an additional outgroup, three source populations are again required. This suggests that in the absence of the data from Levant_ChL there is insufficient statistical leverage to detect Anatolian-related ancestry that is truly present in admixed form in the Levant_BA_North population (data from the Levant_ChL population makes it possible to detect this ancestry). This may explain why Haber et al.26 did not detect the Anatolian Neolithic-related admixture in Levant_BA_North.
### Biologically important mutations in the Peqi’in population
This study nearly doubles the number of individuals with genome-wide data from the ancient Levant. Measured in terms of the average coverage at SNPs, the increase is even more pronounced due to the higher quality of the data reported here than in previous studies of ancient Near Easterners24,26. Thus, the present study substantially increases the power to analyze the change in frequencies of alleles known to be biologically important.
We leveraged our data to examine the change in frequency of SNP alleles known to be related to metabolism, pigmentation, disease susceptibility, immunity, and inflammation in the Levant_ChL population, considered in relation to allele frequencies in the Levant_N, Levant_BA_North, Levant_BA_South, Anatolia_N and Iran_ChL populations and present-day pools of African (AFR), East Asian (EAS), and European (EUR) ancestry in the 1000 Genomes Project Phase 3 dataset39 (Supplementary Data 7).
We highlight three findings of interest. First, an allele (G) at rs12913832 near the OCA2 gene, with a proven association to blue eye color in individuals of European descent40, has an estimated alternative allele frequency of 49% in the Levant_ChL population, suggesting that the blue-eyed phenotype was common in the Levant_ChL population.
Second, an allele at rs1426654 in the SLC24A5 gene, which is one of the most important determinants of light pigmentation in West Eurasians41, is fixed for the derived allele (A) in the Levant_ChL population, suggesting that a light-skinned phenotype may have been common in this population, although any inferences about skin pigmentation based on allele frequencies observed at a single site need to be viewed with caution42.
Third, an allele (G) at rs6903823 in the ZKSCAN3 and ZSCAN31 genes, which is absent in all early agriculturalists reported to date (Levant_N, Anatolia_N, Iran_N) and which has been argued to have been under positive selection by Mathieson et al.31, occurs with an estimated frequency of 20% in the Levant_ChL, 17% in the Levant_BA_South, and 15% in the Iran_ChL populations, while it is absent in all other populations. This suggests that the allele was rising in frequency in Chalcolithic and Bronze Age Near Eastern populations at the same time as it was rising in frequency in Europe.
## Discussion
The Chalcolithic period in the Levant witnessed major cultural transformations in virtually all areas of culture, including craft production, mortuary and ritual practices, settlement patterns, and iconographic and symbolic expression43. The current study provides insight into a long-standing debate in the prehistory of the Levant, implying that the emergence of the Chalcolithic material culture was associated with population movement and turnover.
The quality of ancient DNA obtained from the Peqi’in Cave samples is excellent relative to other sites in the Near East. We hypothesize that the exceptional preservation is due to two factors. First, the targeted sampling of ancient DNA from the petrous portion of the temporal bone makes it possible to obtain high-quality ancient DNA from previously inaccessible geographic regions24,27,44,45. Second, the environment of Peqi’in Cave is likely to be favorable for DNA preservation. The skeletal remains—either stored in ossuaries or laid in the ground—were quickly covered by a limestone crust, isolating them from their immediate surroundings and protecting them from acidic conditions that are known to be damaging to DNA.
We find that the individuals buried in Peqi’in Cave represent a relatively genetically homogenous population. This homogeneity is evident not only in the genome-wide analyses but also in the fact that most of the male individuals (nine out of ten) belong to the Y-chromosome haplogroup T (see Supplementary Table 1), a lineage thought to have diversified in the Near East46. This finding contrasts with both earlier (Neolithic and Epipaleolithic) Levantine populations, which were dominated by haplogroup E24, and later Bronze Age individuals, all of whom belonged to haplogroup J24,26.
Our finding that the Levant_ChL population can be well-modeled as a three-way admixture between Levant_N (57%), Anatolia_N (26%), and Iran_ChL (17%), while the Levant_BA_South can be modeled as a mixture of Levant_N (58%) and Iran_ChL (42%), but has little if any additional Anatolia_N-related ancestry, can only be explained by multiple episodes of population movement. The presence of Iran_ChL-related ancestry in both populations – but not in the earlier Levant_N – suggests a history of spread into the Levant of peoples related to Iranian agriculturalists, which must have occurred at least by the time of the Chalcolithic. The Anatolia_N component present in the Levant_ChL but not in the Levant_BA_South sample suggests that there was also a separate spread of Anatolian-related people into the region. The Levant_BA_South population may thus represent a remnant of a population that formed after an initial spread of Iran_ChL-related ancestry into the Levant that was not affected by the spread of an Anatolia_N-related population, or perhaps a reintroduction of a population without Anatolia_N-related ancestry to the region. We additionally find that the Levant_ChL population does not serve as a likely source of the Levantine-related ancestry in present-day East African populations (see Supplementary Note 4)24.
These genetic results have striking correlates to material culture changes in the archaeological record. The archaeological finds at Peqi’in Cave share distinctive characteristics with other Chalcolithic sites, both to the north and south, including secondary burial in ossuaries with iconographic and geometric designs. It has been suggested that some Late Chalcolithic burial customs, artifacts and motifs may have had their origin in earlier Neolithic traditions in Anatolia and northern Mesopotamia8,13,47. Some of the artistic expressions have been related to finds and ideas from these more northern regions, and to later religious concepts such as the gods Inanna and Dumuzi6,8,47,48,49,50. The knowledge and resources required to produce metallurgical artifacts in the Levant have also been hypothesized to come from the north11,51.
Our finding of genetic discontinuity between the Chalcolithic and Early Bronze Age periods also resonates with aspects of the archaeological record marked by dramatic changes in settlement patterns43, large-scale abandonment of sites52,53,54,55, many fewer items with symbolic meaning, and shifts in burial practices, including the disappearance of secondary burial in ossuaries56,57,58,59. This supports the view that profound cultural upheaval, leading to the extinction of populations, was associated with the collapse of the Chalcolithic culture in this region18,60,61,62,63,64.
These ancient DNA results reveal a relatively genetically homogeneous population in Peqi’in. We show that the movements of people within the region of the southern Levant were remarkably dynamic, with some populations, such as the one buried at Peqi’in, being formed in part by exogenous influences. This study also provides a case-study relevant beyond the Levant, showing how combined analysis of genetic and archaeological data can provide rich information about the mechanism of change in past societies.
## Methods
### Data generation
We screened all libraries for authentic DNA by enriching for the mitochondrial genome and 50 nuclear target loci, followed by sequencing on an Illumina NextSeq500 instrument for 2 × 75 cycles and 2 × 7 cycles to read out the indices. We enriched promising libraries for approximately 1.2 million SNPs as described in refs. 31,36,66 and then sequenced on a NextSeq500 sequencer using 75 base pair paired-end sequences. During computational processing, we initially stripped identifying oligonucleotide sequences and adapters, separating individual samples from pooled captures by their identifying 7 base pair indices at the 5′ and 3′ ends of reads, and requiring matches to sample-specific barcodes appended directly to the sequence fragments, allowing no more than one mismatch per index/barcode. We used SeqPrep67 to strip adapters and also to merge paired end reads into single sequences by requiring a minimum of 15 base pair overlap (allowing up to one mismatch), using the highest quality base in the merged region where there was a conflict. We used samse in bwa (v0.6.1)68 to align reads. For the mitochondrial DNA enrichment experiment we aligned to the RSRS mitochondrial genome69. For the whole-genome enrichment experiment we aligned to the hg19 reference genome. We identified duplicate sequences as ones with the same start and end positions and orientation and also identical barcode pairs, and retained the highest quality sequence from each duplicate. We made pseudo-haploid SNP calls for each position using a randomly chosen sequence covering each targeted site, stripping the two bases at the ends of each sequence to remove deaminated mutations, requiring a minimum mapping quality (MAPQ ≥ 10), and restricting to sites with a minimum base quality (≥20).
We assessed the quality of each library at the screening stage using three standard methods for determining ancient DNA authenticity. First, we analyzed mitochondrial genome data to determine the rate of matching to the consensus sequence, using contamMix30. Second, we restricted to samples in which the rate of C-to-T substitutions in terminal nucleotides was at least 3%, as expected for genuine ancient DNA using the partial UDG treatment protocol29. Finally, we used the ANGSD software to obtain a conservative estimate of contamination in the X-chromosome of individuals determined to be male based on the rate of polymorphism on X-chromosome sequences (males have only a single X-chromosome and so are not expected to show polymorphism); we excluded libraries with X-contamination estimates greater than 1.5%32. For samples where multiple libraries were produced for a single individual, we merged libraries that passed quality control, and obtained new pseudo-haploid SNP calls.
We determined mitochondrial DNA haplogroups using the tool haplogrep270, using a consensus sequence built from reads enriched for the mitochondrial genome, restricting to damaged reads using PMDtools71 (pmdscore ≥ 3), and trimming 5 bases from each end to greatly reduce the error rate due to deamination.
Ancient DNA presents challenges in the assignment of Y-chromosome haplogroups due to the chance that there may be contamination, DNA damage or missing data present in them. In order to assign Y haplogroups to our data, we used a modified version of the procedure used in the analysis of modern Y chromosomes in the 1000 Genomes Project72, which uses a breadth-first search to traverse the Y-chromosome tree. We made our calls on the ISOGG tree from 04.01.2016 [http://isogg.org], and modified the caller to output derived and ancestral allele calls for each informative position on the tree. We then assigned a score to each of the reference haplogroups by counting the number of mismatches in the number of observed derived alleles on that branch and down-weighted derived mutations that were transitions to 1/3 that of transversions to account for DNA damage related errors. We assigned the sample to the reference haplogroup with the closest match based on this score. While we endeavored to produce a call on each sample, we note that samples with fewer than 100,000 SNPs have too little data to confidently identify the correct haplogroup, and we encourage caution when interpreting these results.
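As a rough illustration of the scoring rule described above (the data structures, and the handling of missing calls, are my guesses rather than the actual caller):

```python
TRANSITIONS = {("C", "T"), ("T", "C"), ("G", "A"), ("A", "G")}

def branch_score(calls, branch_sites):
    # calls: position -> observed allele; branch_sites: (pos, anc, der)
    # tuples for one candidate branch on the ISOGG tree. Mismatching derived
    # calls cost 1 for transversions and 1/3 for transitions, since
    # transitions may reflect ancient DNA deamination damage.
    score = 0.0
    for pos, anc, der in branch_sites:
        obs = calls.get(pos)
        if obs is not None and obs != der:
            score += 1 / 3 if (anc, der) in TRANSITIONS else 1.0
    return score

# assign the haplogroup whose branch minimizes the score, e.g.:
# best = min(branches, key=lambda b: branch_score(calls, branches[b]))
```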
The data from the 22 samples that passed contamination and quality control tests are reported in Supplementary Table 1, with an average of 0.97× coverage on the 1240 k SNP targets, and an average of 358,313 SNPs covered at least once. A by-library table describing the screening results is reported in Supplementary Data 1. We excluded two individuals from further analysis, as the genetic patterns observed using the method described in Kuhn et al.73 showed that they were first-degree relatives of higher coverage samples in the dataset. We restricted data from sample I1183 to include only sequences with evidence of C-to-T substitution in order to minimize contamination, which was evident in the full data from this sample.
We combined the newly reported data with existing data from Lazaridis et al.24 and Haber et al.26, using the mergeit program of EIGENSOFT33. The resulting datasets, referred to as HO+ and HOIll+, contain the 20 new unrelated samples combined with HO and HOIll from Lazaridis et al.24 and 5 ancient samples from Sidon, Lebanon (population name: Levant_BA_North) from Haber et al.26, respectively. HO+ includes data from 2891 modern and ancient individuals at 591,642 SNPs, and HOIll+ includes data from 306 ancient individuals at 1,054,637 SNPs.
### Principal component analysis
We performed PCA on the HO+ dataset using smartpca33. We used a total of 984 present-day individuals for PCA, and projected the 306 ancient samples. We used default parameters with lsqproject: YES and numoutlieriter: 0 settings. We estimated FST using smartpca for the 21 ancient Near Eastern populations made up of more than one individual and 8 modern populations using default parameters, with inbreed: YES and fstonly: YES (Supplementary Figure 1). We ran analyses using the HO+ dataset.
We carried out ADMIXTURE analysis34 on the HO+ dataset. Prior to analyses, we pruned SNPs in strong linkage disequilibrium with each other using PLINK74 with the parameters --indep-pairwise 200 25 0.4. We performed ADMIXTURE analysis on the 300,885 SNPs remaining in the pruned dataset. For each value of k between 2 and 14, we performed 20 replicate analyses, and we retained the highest likelihood replicate for each k.
### Conditional heterozygosity
We computed conditional heterozygosity for each ancient Levantine population using popstats75. For this analysis we used the HO+ dataset, restricting to SNP sites ascertained from a single Yoruba individual and to transversion SNPs, as described in Skoglund et al.44.
### f-statistics
We computed f4-statistics using the qpDstat program in ADMIXTOOLS35, with default parameters and f4mode: YES. We computed f3-statistics using the qp3Pop program in ADMIXTOOLS35, using default parameters, with inbreed: YES. We ran all analyses using the HOIll+ dataset, except for the statistic f4(Levant_BA_North, Levant_BA_South; A, Chimp), which we ran on the HO+ dataset.
We estimated proportions of ancestry in the Levant_ChL population using the qpAdm methodology, with parameters allsnps: YES and details:YES36. We tested both 2- and 3-way admixtures between ancient “Left” populations from the HOIll+ dataset. We used the 09NW populations defined in Lazaridis et al.24 as preliminary outgroups. We selected additional outgroups based on the statistics f4(Anatolia_N, Europe_EN; A, Chimpanzee) and f4(Levant_BA_North, Iran_ChL; A, Chimpanzee), and we repeated qpAdm with each additional outgroup added into the “Right” list until all but one admixture model was eliminated.
We used qpAdm to determine whether the Levant_BA_South and Levant_BA_North populations could be modeled using Levant_ChL as a source population. We tested 2-way admixtures between Levant_ChL and every other ancient “Left” population from the HOIll+ dataset. We also tested the “Left” populations Levant_N and Iran_ChL. We used the 09NW “Right” populations as preliminary outgroup populations, and confirmed our findings for Levant_BA_North using the outgroups defined in Haber et al.26. We added additional outgroups to further differentiate between plausible models, and repeated qpAdm analysis until all but one candidate admixture model was eliminated.
### qpWave
We computed the minimum number of streams of ancestry required to model two sets of three Levantine populations (set [1] Levant_N, Levant_ChL, and Levant_BA_South, set [2] Levant_N, Levant_BA_South, Levant_BA_North) using the qpWave37,38 methodology with parameter allsnps:YES.
### Allele frequency comparisons
We examined the frequencies of SNPs associated with phenotypically important functions in the categories of metabolism, pigmentation, disease susceptibility, immunity, and inflammation in Levant_ChL in conjunction with the Levant_N, Levant_BA_North, Levant_BA_South, Anatolia_N and Iran_ChL populations, with allele frequencies for three pooled continental populations (AFR, EAS, EUR) in Phase 3 of the 1000 Genomes Project reported where available. We computed allele frequencies at each site of interest by computing the likelihood of the population reference allele frequency given the data, using a method established in Mathieson et al.31. For each population of $$N$$ individuals, we observe $$R_i$$ sequences that possess the reference allele out of a total of $$T_i$$ sequences for individual $$i$$. The likelihood of the reference allele frequency $$p$$ in each population given the data $$D = \{R_i, T_i\}_{i=1}^{N}$$ is $$L(p;D) = \prod_{i = 1}^{N} \left\{ p^2 B\left( R_i,T_i,1 - \varepsilon \right) + 2p\left( 1 - p \right)B\left( R_i,T_i,0.5 \right) + (1 - p)^2 B\left( R_i,T_i,\varepsilon \right)\right\},$$ where $$B(k,n,p) = \binom{n}{k}p^k(1 - p)^{n - k}$$ is the binomial probability mass function and $$\varepsilon$$ is a small probability of error, which we set to 0.001 for our calculations. We estimated allele frequencies by maximizing the likelihood numerically for each population.
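The numerical maximization can be reproduced in a few lines of SciPy; this is a sketch of the stated likelihood, not the authors' code (the small floor inside the logarithm is only a numerical guard):

```python
import numpy as np
from scipy.stats import binom
from scipy.optimize import minimize_scalar

def mle_freq(R, T, eps=0.001):
    # R[i]: reference-allele reads for individual i; T[i]: total reads.
    R, T = np.asarray(R), np.asarray(T)

    def neg_log_lik(p):
        # mixture over the three possible diploid genotypes, as in the text
        lik = (p**2 * binom.pmf(R, T, 1 - eps)
               + 2 * p * (1 - p) * binom.pmf(R, T, 0.5)
               + (1 - p)**2 * binom.pmf(R, T, eps))
        return -np.sum(np.log(lik + 1e-300))

    res = minimize_scalar(neg_log_lik, bounds=(0.0, 1.0), method="bounded")
    return res.x
```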
### Data availability
The aligned sequences are available through the European Nucleotide Archive under accession number PRJEB27215. Genotype datasets used in analysis are available at https://reich.hms.harvard.edu/datasets.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Change history
### 05 September 2018
This Article was originally published without the accompanying Peer Review File. This file is now available in the HTML version of the Article; the PDF was correct from the time of publication.
### 20 September 2018
In the original version of this Article, references in the format ‘First author et al.’ were inappropriately deleted. These errors have been corrected in the PDF and HTML versions of the Article.
## References
1. Ussishkin, D. The Ghassulian shrine at En-gedi. Tel Aviv 7, 1–44 (1980).
2. Seaton, P. Chalcolithic Cult and Risk Management at Teleilat Ghassul: The Area E Sanctuary (CMP (UK) Ltd., 2008).
3. Levy, T. E. Archaeology, Anthropology, and Cult—The Sanctuary at Gilat (Israel, London, 2006).
4. Perrot, J. & Ladiray, D. Tombes à Ossuaires de la Région Côtière Palestinienne au IVe Millénaire avant l'Ère Chrétienne (Association Paléorient, 1980).
5. van den Brink, E. C. in Shoham (North): Late Chalcolithic Burial Caves in the Lod Valley, Israel (eds van den Brink, E. C. et al.) 175–190 (Israel Antiquities Authority, 2005).
6. Bar-Adon, P. The Cave of the Treasure (The Israel Exploration Society, 1980).
7. Drabsch, B. The Mysterious Wall Paintings of Teleilat Ghassul, Jordan (Archaeopress Archaeology, 2015).
8. Shalem, D. Iconography on Ossuaries and Burial Jars from the Late Chalcolithic Period in Israel in the Context of the Ancient Near East. PhD dissertation, Haifa Univ. (2008).
9. Shalem, D. Motifs on the Nahal Mishmar hoard and the ossuaries: comparative observations and interpretations. J. Isr. Prehist. Soc. 45, 217–237 (2015).
10. Goren, Y. Gods, Caves, and Scholars: Chalcolithic Cult and Metallurgy in the Judean Desert. Near East. Archaeol. 77, 260–266 (2014).
11. Tadmor, M. et al. The Nahal Mishmar hoard from the Judean desert: technology, composition, and provenance. Atiqot 27, 95–148 (1995).
12. Anati, E. Palestine Before the Hebrews (Alfred A. Knopf, 1963).
13. de Vaux, R. in Cambridge Ancient History Vol. 1 (eds Gadd, C. J., Edwards, I. E. S. & Hammond, N. G. L.) 498–538 (Cambridge University Press, 1970).
14. Bourke, S. J. in The Prehistory of Jordan II: Perspectives from 1997, Studies in Early Near Eastern Production, Subsistence, and Environment (eds Kafafi, Z., Rollefson, G. & Gebel, H. G. K.) 395–417 (Ex oriente, 1997).
15. Bourke, S. J. in Studies in the History and Archaeology of Jordan Vol. 6 (ed. Zaghloul, I.) 249–259 (Department of Antiquities, 1997).
16. Gilead, I. The Chalcolithic period in the Levant. J. World Prehist. 2, 397–443 (1988).
17. Hennessy, J. B. Preliminary report on a first season of excavations at Teleilat Ghassul. Levant 1, 1–24 (1969).
18. Levy, T. E. in The Archaeology of Society in the Holy Land (ed. Levy, T. E.) 226–244 (Leicester University Press, 1995).
19. Moore, A. M. The Late Neolithic in Palestine. Levant 5, 36–68 (1973).
20. Gal, Z., Smithline, H. & Shalem, D. A Chalcolithic burial cave in Peqi’in, Upper Galilee. Isr. Explor. J. 47, 145–154 (1997).
21. Shalem, D. et al. Peqi’in: A Late Chalcolithic Burial Site, Upper Galilee, Israel (Kinneret Academic College Institute for Galilean Archaeology, 2013).
22. Nagar, Y. in Peqi’in: A Late Chalcolithic Burial Site, Upper Galilee, Israel (eds Shalem, D., Gal, Z. & Smithline, H.) 391–405 (The Institute for Galilean Archaeology, 2013).
23. Segal, D., Carmi, I., Gal, Z., Smithline, H. & Shalem, D. Dating a Chalcolithic burial cave in Peqi’in, upper Galilee, Israel. Radiocarbon 40, 707–712 (1998).
24. Lazaridis, I. et al. Genomic insights into the origin of farming in the ancient Near East. Nature 536, 419 (2016).
25. Broushaki, F. et al. Early Neolithic genomes from the eastern Fertile Crescent. Science 353, 499–503 (2016).
26. Haber, M. et al. Continuity and admixture in the last five millennia of Levantine history from ancient Canaanite and present-day Lebanese genome sequences. Am. J. Hum. Genet. 101, 274–282 (2017).
27. Gamba, C. et al. Genome flux and stasis in a five millennium transect of European prehistory. Nat. Commun. 5, 5257 (2014).
28. Dabney, J. et al. Complete mitochondrial genome sequence of a Middle Pleistocene cave bear reconstructed from ultrashort DNA fragments. Proc. Natl Acad. Sci. 110, 15758–15763 (2013).
29. Rohland, N., Harney, E., Mallick, S., Nordenfelt, S. & Reich, D. Partial uracil–DNA–glycosylase treatment for screening of ancient DNA. Philos. Trans. R. Soc. B 370, 20130624 (2015).
30. Fu, Q. et al. DNA analysis of an early modern human from Tianyuan Cave, China. Proc. Natl Acad. Sci. 110, 2223–2227 (2013).
31. Mathieson, I. et al. Genome-wide patterns of selection in 230 ancient Eurasians. Nature 528, 499–503 (2015).
32. Korneliussen, T. S., Albrechtsen, A. & Nielsen, R. ANGSD: analysis of next generation sequencing data. BMC Bioinforma. 15, 356 (2014).
33. Patterson, N., Price, A. L. & Reich, D. Population structure and eigenanalysis. PLoS Genet. 2, e190 (2006).
34. Alexander, D. H., Novembre, J. & Lange, K. Fast model-based estimation of ancestry in unrelated individuals. Genome Res. 19, 1655–1664 (2009).
35. Patterson, N. et al. Ancient admixture in human history. Genetics 192, 1065–1093 (2012).
36. Haak, W. et al. Massive migration from the steppe was a source for Indo-European languages in Europe. Nature 522, 207–211 (2015).
37. Moorjani, P. et al. Genetic evidence for recent population mixture in India. Am. J. Hum. Genet. 93, 422–438 (2013).
38. Reich, D. et al. Reconstructing Native American population history. Nature 488, 370–374 (2012).
39. The 1000 Genomes Project Consortium. A global reference for human genetic variation. Nature 526, 68 (2015).
40. Eiberg, H. et al. Blue eye color in humans may be caused by a perfectly associated founder mutation in a regulatory element located within the HERC2 gene inhibiting OCA2 expression. Hum. Genet. 123, 177–187 (2008).
41. Soejima, M. & Koda, Y. Population differences of two coding SNPs in pigmentation-related genes SLC24A5 and SLC45A2. Int. J. Leg. Med. 121, 36–39 (2007).
42. Martin, A. R. et al. An unexpectedly complex architecture for skin pigmentation in Africans. Cell 171, 1340–1353 (2017).
43. Rowan, Y. M. & Golden, J. The Chalcolithic period of the Southern Levant: a synthetic review. J. World Prehist. 22, 1–92 (2009).
44. Skoglund, P. et al. Genomic insights into the peopling of the Southwest Pacific. Nature 538, 510 (2016).
45. Skoglund, P. et al. Reconstructing prehistoric African population structure. Cell 171, 59–71 (2017).
46. Mendez, F. L. et al. Increased resolution of Y-chromosome haplogroup T defines relationships among populations of the Near East, Europe, and Africa. Hum. Biol. 83, 39–53 (2011).
47. Shalem, D. Cultural continuity and changes in South Levantine Late Chalcolithic burial customs and iconographic imagery: an interpretation of the finds from the Peqi’in Cave. J. Isr. Prehist. Soc. 47, 148–170 (2017).
48. Merhav, R., Heltzer, M., Segal, A. & Kaufman, D. in Studies in the Archaeology and History of Ancient Israel in Honour of Moshe Dothan (eds Heltzer, M., Segal, A. & Kaufman, D.) 21–42 (Haifa University Press, 1993).
49. Bar-Yosef, O. & Ayalon, E. Chalcolithic ossuaries: what do they imitate and why? Qadmoniot 34, 34–43 (2001).
50. Beck, P. in Essays in Ancient Civilization Presented to Helene J. Kantor, Studies in Ancient Oriental Civilization (eds Leonard, A. & Williams, B. B.) 39–54 (The Oriental Institute, 1989).
51. Yahalom-Mack, N. et al. The earliest lead object in the Levant. PLoS ONE 10, e0142948 (2015).
52. Gilead, I. The history of the Chalcolithic settlement in the Nahal Beer Sheva area: the radiocarbon aspect. Bull. Am. Sch. Orient. Res. 296, 1–13 (1994).
53. Bourke, S. et al. The chronology of the Ghassulian Chalcolithic period in the southern Levant: new 14C determinations from Teleilat Ghassul, Jordan. Radiocarbon 43, 1217–1222 (2001).
54. Bourke, S. J. & Lovell, J. L. Ghassul, chronology and cultural sequencing. Paléorient 30, 179–182 (2004).
55. Gilead, I. in Culture, Chronology and the Chalcolithic: Theory and Transition (eds Lovell, J. L. & Rowan, Y. M.) 12–24 (Oxbow Books, 2011).
56. van den Brink, E. C. in Culture, Chronology and the Chalcolithic: Theory and Transition (eds Lovell, J. L. & Rowan, Y. M.) 61–70 (Oxbow Books, 2011).
57. Vardi, J. & Gilead, I. Chalcolithic–Early Bronze Age I transition in the Southern Levant: the lithic perspective. Paléorient 39, 111–123 (2013).
58. Milevski, I. The transition from the Chalcolithic to the Early Bronze Age in the Southern Levant in socio-economic context. Paléorient 39, 193–208 (2013).
59. Braun, E. & Roux, V. The Late Chalcolithic to Early Bronze Age I transition in the Southern Levant: determining continuity and discontinuity or “Mind the Gap”. Paléorient 39, 15–22 (2013).
60. Joffe, A. H. & Dessel, J. Redefining chronology and terminology for the Chalcolithic of the southern Levant. Curr. Anthropol. 36, 507–518 (1995).
61. Yadin, Y. The earliest record of Egypt’s military penetration into Asia? Some aspects of the Narmer Palette, the “Desert Kites” and Mesopotamian seal cylinders. Isr. Explor. J. 5, 1–16 (1955).
62. Yeivin, S. Early contacts between Canaan and Egypt. Isr. Explor. J. 10, 193–203 (1960).
63. Ussishkin, D. The “Ghassulian” temple in Ein Gedi and the origin of the hoard from Nahal Mishmar. Biblical Archaeol. 34, 23–39 (1971).
64. Davidovich, U. The Chalcolithic–Early Bronze Age transition: a view from the Judean Desert Caves, Southern Levant. Paléorient 39, 125–138 (2013).
65. Korlević, P. et al. Reducing microbial and human contamination in DNA extractions from ancient bones and teeth. Biotechniques 59, 87–93 (2015).
66. Fu, Q. et al. An early modern human from Romania with a recent Neanderthal ancestor. Nature 524, 216–219 (2015).
67. Harbison, C. T. et al. Transcriptional regulatory code of a eukaryotic genome. Nature 431, 99–104 (2004).
68. Li, H. & Durbin, R. Fast and accurate short read alignment with Burrows–Wheeler transform. Bioinformatics 25, 1754–1760 (2009).
69. Behar, D. M. et al. A “Copernican” reassessment of the human mitochondrial DNA tree from its root. Am. J. Hum. Genet. 90, 675–684 (2012).
70. Weissensteiner, H. et al. HaploGrep2: mitochondrial haplogroup classification in the era of high-throughput sequencing. Nucleic Acids Res. 44, W58–W63 (2016).
71. Skoglund, P. et al. Separating ancient DNA from modern contamination in a Siberian Neandertal. Proc. Natl Acad. Sci. 111, 2229–2234 (2014).
72. Poznik, G. D. et al. Punctuated bursts in human male demography inferred from 1,244 worldwide Y-chromosome sequences. Nat. Genet. 48, 593 (2016).
73. Kuhn, J. M. M., Jakobsson, M. & Günther, T. Estimating genetic kin relationships in prehistoric populations. PLoS ONE 13, e0195491 (2018).
74. Purcell, S. et al. PLINK: a tool set for whole-genome association and population-based linkage analyses. Am. J. Hum. Genet. 81, 559–575 (2007).
75. Skoglund, P. et al. Genetic evidence for two founding populations of the Americas. Nature 525, 104 (2015).
## Acknowledgments
Peqi’in burial Cave was excavated under the auspices of the Israel Antiquities Authority. E.H. was supported by a graduate student fellowship from the Max Planck–Harvard Research Center for the Archaeoscience of the Ancient Mediterranean (MHAAM). D.R. was supported by the U.S. National Science Foundation HOMINID grant BCS-1032255, the U.S. National Institutes of Health grant GM100233, by an Allen Discovery Center grant, and is an investigator of the Howard Hughes Medical Institute. The anthropological study was supported by the Dan David Foundation. We thank Vagheesh Narasimhan for generating and describing Y-chromosome haplogroup calls. We thank Ariel Pokhojaev for creating the map image used in Fig. 1a. We thank John Wakeley for critical comments.
## Author information
### Author notes
1. These authors contributed equally: Éadaoin Harney, Hila May.
2. These authors jointly supervised this work: Israel Hershkovitz, David Reich.
### Affiliations
2. Department of Genetics, Harvard Medical School, Boston, MA, 02115, USA: Swapan Mallick, Iosif Lazaridis, Kristin Stewardson, Susanne Nordenfelt & David Reich
3. The Max Planck–Harvard Research Center for the Archaeoscience of the Ancient Mediterranean, Cambridge, MA, 02138, USA: Iosif Lazaridis & David Reich
4. Department of Anatomy and Anthropology, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, 6997801, Israel: Hila May & Israel Hershkovitz
5. Shmunis Family Anthropology Institute, Dan David Center for Human Evolution and Biohistory Research, Sackler Faculty of Medicine, Steinhardt Natural History Museum, Tel Aviv University, Tel Aviv, 6997801, Israel: Hila May, Rachel Sarig & Israel Hershkovitz
6. The Institute for Galilean Archaeology, Kinneret Academic College, Kinneret, 15132, Israel: Dina Shalem
7. Broad Institute of MIT and Harvard, Cambridge, MA, 02142, USA: Swapan Mallick, Nick Patterson & David Reich
8. Howard Hughes Medical Institute, Boston, MA, 02115, USA: Swapan Mallick, Kristin Stewardson, Susanne Nordenfelt, Nick Patterson & David Reich
9. The Maurice and Gabriela Goldschleger School of Dental Medicine, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, 6997801, Israel: Rachel Sarig
### Contributions
H.M., I.H., and D.R. conceived the study. D.R. supervised the ancient DNA work, sequencing, and data analysis. H.M., D.S., R.S., and I.H. assembled, studied, or described the archaeological material. E.H., H.M., N.R., K.S., and S.N. performed or supervised wet laboratory work. S.M. performed bioinformatics analyses. E.H. performed population genetics analyses, with I.L. and N.P. providing guidance. E.H., H.M., I.H., and D.R. wrote the manuscript with input from all co-authors.
### Competing interests
The authors declare no competing interests.
### Corresponding authors
Correspondence to Éadaoin Harney or Hila May. | 2019-03-22 06:30:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.550973117351532, "perplexity": 8674.391521331807}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202635.43/warc/CC-MAIN-20190322054710-20190322080710-00398.warc.gz"} |
https://gluon-cv.mxnet.io/build/examples_datasets/recordio.html | # Prepare your dataset in ImageRecord format¶
Raw images are the natural data format for computer vision tasks. However, when loading data from image files for training, disk IO may become a bottleneck.
For instance, when training a ResNet50 model with ImageNet on an AWS p3.16xlarge instance, parallel training on 8 GPUs is so fast that even reading images from a ramdisk cannot keep up.
To boost performance on such top-configured platforms, we suggest that users train with MXNet's ImageRecord format.
## Preparation
It is as simple as a few lines of code to create ImageRecord file for your own images.
Assuming we have a folder ./example, in which images are placed in different subfolders representing classes:
```
./example/class_A/1.jpg
./example/class_A/2.jpg
./example/class_A/3.jpg
./example/class_B/4.jpg
./example/class_B/5.jpg
./example/class_B/6.jpg
./example/class_C/100.jpg
./example/class_C/1024.jpg
./example/class_D/65535.jpg
./example/class_D/0.jpg
...
```
First, we need to generate a .lst file, i.e. a list of these images containing label and filename information.
```bash
python im2rec.py ./example_rec ./example/ --recursive --list --num-thread 8
```
After the execution, you may find a file ./example_rec.lst generated. Each of its lines is tab-separated, holding an integer image index, the numeric label (one or more columns), and the image path relative to the root folder.
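An illustrative excerpt (the exact indices and ordering depend on the shuffling that im2rec applies by default):

```text
3	0.000000	class_A/2.jpg
0	1.000000	class_B/5.jpg
7	2.000000	class_C/100.jpg
```

With this file, the next step is: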
```bash
python im2rec.py ./example_rec ./example/ --recursive --pass-through --pack-label --num-thread 8
```
It gives you two more files: example_rec.idx and example_rec.rec. Now, you can use them to train!
For validation set, we usually don’t shuffle the order of images, thus the corresponding command would be
```bash
python im2rec.py ./example_rec_val ./example_val --recursive --list --num-thread 8
python im2rec.py ./example_rec_val ./example_val --recursive --pass-through --pack-label --no-shuffle --num-thread 8
```
## ImageRecord file for ImageNet
As mentioned previously, ImageNet training can benefit from the improved IO speed with ImageRecord format.
First, please download the helper script imagenet.py and the validation image info imagenet_val_maps.pklz. Make sure to put them in the same directory.
Assuming the tar files are saved in folder ~/ILSVRC2012. We can use the following command to prepare the dataset automatically.
```bash
python imagenet.py --download-dir ~/ILSVRC2012 --with-rec
```
Note
Extracting the images may take a while. For example, it takes about 30min on an AWS EC2 instance with EBS.
By default imagenet.py will extract the images into ~/.mxnet/datasets/imagenet. You can specify a different target folder by setting --target-dir.
The prepared dataset can be loaded with the utility class mxnet.io.ImageRecordIter directly. Here is an example that reads batches of 32 images in random order; randomized cropping and resizing can be enabled through additional ImageRecordIter arguments.
```python
import os
from mxnet import nd
from mxnet.io import ImageRecordIter

rec_path = os.path.expanduser('~/.mxnet/datasets/imagenet/rec/')

# You need to specify root for ImageNet if you extracted the images into
# a different folder
train_data = ImageRecordIter(
    path_imgrec=os.path.join(rec_path, 'train.rec'),
    path_imgidx=os.path.join(rec_path, 'train.idx'),
    data_shape=(3, 224, 224),
    batch_size=32,
    shuffle=True,
)

for batch in train_data:
    print(batch.data[0].shape, batch.label[0].shape)
    break
```
Out:

```text
(32, 3, 224, 224) (32,)
```
Plot some validation images
```python
from gluoncv.utils import viz

val_data = ImageRecordIter(
    path_imgrec=os.path.join(rec_path, 'val.rec'),
    path_imgidx=os.path.join(rec_path, 'val.idx'),
    data_shape=(3, 224, 224),
    batch_size=32,
    shuffle=False,
)

for batch in val_data:
    viz.plot_image(nd.transpose(batch.data[0][12], (1, 2, 0)))
    viz.plot_image(nd.transpose(batch.data[0][21], (1, 2, 0)))
    break
```
Total running time of the script: ( 0 minutes 10.226 seconds)
Gallery generated by Sphinx-Gallery | 2019-03-22 05:48:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30699941515922546, "perplexity": 14628.552147592476}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202635.43/warc/CC-MAIN-20190322054710-20190322080710-00259.warc.gz"} |
https://www.lpsm.paris/mathdoc/preprints/delat.Wed_May_19_09_47_52_EDT_1999.html | Université Paris 6Pierre et Marie Curie Université Paris 7Denis Diderot CNRS U.M.R. 7599 Probabilités et Modèles Aléatoires''
### Mixed Gaussian white noise
Author(s):
MSC classification code(s):
• 62C20 Minimax procedures
• 62G07 Curve estimation (nonparametric regression, density estimation, etc.)
Abstract: We study the problem of estimating a signal $f$ from noisy data under squared-error loss. We assume that $f$ belongs to a certain Sobolev class. The noise process is represented by $t \rightarrow \frac{1}{\sqrt{n}}\int_0^t \sqrt{V_s}dW_s$, where $V$ is a random process independent of the driving Brownian motion $W$. Thus, conditional on $V$, the function $f$ is observed with Gaussian white noise. This setup generalizes the traditional 'ideal signal $+$ noise' framework adopted in nonparametric estimation. We establish upper and lower bounds for the asymptotic minimax risk (as $n \rightarrow \infty$) up to constants. We show in particular that the bound of the Pinsker estimator, which is optimal in the case of a deterministic $V$, can be strictly improved if the law of $V$ is known and non-degenerate. We characterize the influence of the law of $V$ on the optimal constants and construct asymptotically efficient estimators. We present some statistical models which lie in the scope of this new estimation procedure.
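In the notation of the abstract, the observation model can be written out explicitly; the display below is a reconstruction from the summary, not a quotation from the paper:

$$dY_t = f(t)\,dt + \frac{\sqrt{V_t}}{\sqrt{n}}\,dW_t, \qquad t \in [0,1],$$

so that conditional on the path of $V$, this is the classical Gaussian white noise model with local noise level $V_t/n$; averaging the conditional Pinsker-type bounds over the law of $V$ is what permits the strict improvement described above.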
Keywords: Gaussian white noise; mixed normality; nonparametric $L_2$ efficiency; Pinsker bound; linear filtering; minimax estimation; Sobolev ellipsoids
Date: 1999-05-19
Prépublication numéro: PMA-504 | 2018-01-21 02:34:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6520981788635254, "perplexity": 948.2554244033823}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889917.49/warc/CC-MAIN-20180121021136-20180121041136-00541.warc.gz"} |
http://en.wikipedia.org/wiki/Morphism_of_varieties | # Morphism of varieties
In algebraic geometry, a regular map between affine varieties is a mapping which is given by polynomials. To be explicit, suppose X and Y are subvarieties (or algebraic subsets) of $\mathbb{A}^n$ and $\mathbb{A}^m$ respectively. A regular map f from X to Y has the form $f = (f_1, \dots, f_m)$ where the $f_i$ are in the coordinate ring $k[x_1, \dots, x_n]/I$, I the ideal defining X, so that the image $f(X)$ lies in Y; i.e., satisfying the defining equations of Y.[1]
More generally, a map ƒ: X → Y between two varieties is regular at a point x if there is a neighbourhood U of x and a neighbourhood V of ƒ(x) such that the restricted function ƒ: U → V is regular as a function on the coordinate patches of U and V. Then ƒ is called regular, if it is regular at all points of X.
In the particular case that Y equals $\mathbb{A}^1$ the map ƒ: X → $\mathbb{A}^1$ is called a regular function, and corresponds to a scalar function in differential geometry. In other words, a scalar function is regular at a point x if, in a neighborhood of x, it is a rational function (i.e., a fraction of polynomials) such that the denominator does not vanish at x.[2] The ring of regular functions (that is the coordinate ring or more abstractly the ring of global sections of the structure sheaf) is a fundamental object in affine algebraic geometry. The only regular function on a connected projective variety is constant (this can be viewed as an algebraic analogue of Liouville's theorem in complex analysis); thus, in the projective case, one usually considers the global sections of a line bundle (or divisor) instead.
Regular maps are, by definition, morphisms in the category of algebraic varieties. In particular, a regular map between affine varieties corresponds contravariantly in one-to-one to a ring homomorphism between the coordinate rings.
## Isomorphism
A regular map whose inverse is also regular is called biregular, and biregular maps are isomorphisms in the category of algebraic varieties. A morphism between algebraic varieties that is a homeomorphism between the underlying topological spaces need not be an isomorphism (a counterexample is given by a Frobenius morphism $t \mapsto t^p$). On the other hand, if f is a bijective birational map and the target space of f is a normal variety, then f is biregular. (cf. Zariski's main theorem.)
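A standard textbook example of the gap between bijective morphisms and isomorphisms: the map $\mathbb{A}^1 \to C = \{y^2 = x^3\}$, $t \mapsto (t^2, t^3)$, is a regular bijection onto the cuspidal cubic, with set-theoretic inverse $(x,y) \mapsto y/x$ away from the origin; the inverse fails to be regular at the cusp, so the map is not biregular. Note that $C$ is not normal, consistent with the statement above.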
## Official definition
An (abstract) algebraic variety is defined to be a particular kind of a locally ringed space (see for example projective variety for a ringed structure of a projective variety). When this definition is used, a morphism of varieties is a morphism of the locally ringed spaces underlying the varieties (so for example it is continuous by definition).
## Relation to rational functions
Taking the function field k(V) of an irreducible algebraic curve V, the functions F in the function field may all be realised as morphisms from V to the projective line over k. The image will either be a single point, or the whole projective line (this is a consequence of the completeness of projective varieties). That is, unless F is actually constant, we have to attribute to F the value ∞ at some points of V. Now in some sense F is no worse behaved at those points than anywhere else: ∞ is just the chosen point at infinity on the projective line, and by using a Möbius transformation we can move it anywhere we wish. But it is in some way inadequate to the needs of geometry to use only the affine line as target for functions, since we shall end up only with constants.
Because regular and biregular are very restrictive conditions – there are no non-constant regular functions on projective space – the weaker condition of a rational map and birational maps are frequently used as well.
## Properties
A morphism between varieties is continuous with respect to Zariski topologies on the source and the target.
If f is a morphism between varieties, then the image of f contains an open dense subset of its closure. (cf. constructible set.)
On a normal variety, a rational function is regular if and only if it has no poles of codimension one.[3] This is an algebraic analog of Hartogs' extension theorem; there is also a relative version of this fact.
A regular map between complex algebraic varieties is a holomorphic map. (There is actually a slight technical difference: a regular map is a meromorphic map whose singular points are removable, but the distinction is usually ignored in practice.) In particular, a regular map into the complex numbers is just a usual holomorphic function (complex-analytic function).
## Fibers of a morphism
The important fact is:[4]
Theorem — Let f: X → Y be a dominating (i.e., having dense image) morphism of algebraic varieties, and let $r = \dim X - \dim Y$. Then
1. For every irreducible closed subset W of Y and every irreducible component Z of $f^{-1}(W)$ dominating W,
$\dim Z \ge \dim W + r.$
2. There exists a nonempty open subset U in Y such that (a) $U \subset f(X)$ and (b) for every irreducible closed subset W of Y intersecting U and every irreducible component Z of $f^{-1}(W)$ intersecting $f^{-1}(U)$,
$\dim Z = \dim W + r.$
Corollary — Let f: X → Y be a morphism of algebraic varieties. For each x in X, define
$e(x) = \max \{ \dim Z \mid Z \text{ an irreducible component of } f^{-1}(f(x)) \text{ containing } x \}.$
Then e is upper-semicontinuous; i.e., for each integer n, the set
$X_n = \{ x \in X \mid e(x) \ge n \}$
is closed.
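For instance, if $f: X \to \mathbb{A}^2$ is the blow-up of the affine plane at the origin, then $r = 0$ and every fiber is a single point except the fiber over the origin, which is a $\mathbb{P}^1$; accordingly $e(x) = 1$ exactly on the exceptional divisor, which is closed, as the corollary predicts.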
Corollary (Chevalley)[5] — Let f: X → Y be a morphism of algebraic varieties. For each integer n, let
$C_n = \{ y \in Y | \dim f^{-1}(y) = n \}.$
Then $C_n$ are constructible and $C_r$ contains an open dense subset of Y.
In Mumford's red book, the theorem is proved by means of Noether's normalization lemma. For an algebraic approach where the generic freeness plays a main role and the notion of "universally catenary ring" is a key in the proof, see Eisenbud, Ch. 14 of "Commutative algebra with a view toward algebraic geometry." In fact, the proof there shows that if f is flat, then the dimension equality in 2. of the theorem holds in general (not just generically).
## Degree of a finite morphism
Let f: X → Y be a finite morphism between algebraic varieties over a field k. Then the degree of f is the degree of the finite field extension of the function field k(X) over $f^*k(Y)$. By generic freeness, there is some nonempty open subset U in Y such that the restriction of the structure sheaf $\mathcal{O}_X$ to $f^{-1}(U)$ is free as an $\mathcal{O}_Y|_U$-module. The degree of f is then the rank of this free module.
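For example, the squaring map $f: \mathbb{A}^1 \to \mathbb{A}^1$, $x \mapsto x^2$ (over a field of characteristic not 2), is finite of degree 2, since $k(x)$ is a degree-2 extension of $k(x^2)$; generic fibers have exactly two points, while the fiber over 0 has one, reflecting ramification there.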
If f is étale and if X, Y are complete, then for any coherent sheaf F on Y, writing χ for the Euler characteristic,
$\chi(f^* F) = \deg(f) \chi (F).$[6]
(The Riemann–Hurwitz formula for a ramified covering shows the "étale" here cannot be omitted.)
If f is étale and k is algebraically closed, then each geometric fiber f−1(y) consists exactly of deg(f) points. | 2015-05-28 00:09:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 16, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9664214849472046, "perplexity": 300.6905138690645}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929176.79/warc/CC-MAIN-20150521113209-00102-ip-10-180-206-219.ec2.internal.warc.gz"} |
https://programs.wiki/wiki/learning-notes-long-chain-subdivision.html

# [learning notes] long chain decomposition
You have to learn what you don't understand!
## brief introduction
Its difference from heavy-light decomposition is that the "preferred son" is no longer the child with the largest subtree, but the child with the deepest subtree.
So we can see it is mainly used for problems related to depth. It is most often seen optimizing \(dp\), but it is quite flexible, so practice is essential.
## properties
Property 1
The total length of all chains is \(O(n)\).
Property 2
The long chain containing the \(k\)-th ancestor \(y\) of any point has length at least \(k\).
Property 3
Starting from any point and jumping upward, the number of chains crossed does not exceed \(O(\sqrt n)\).
The proofs are omitted; each of these properties is fairly obvious.
## application
### 1. Computing the k-th ancestor
I highly recommend this author's solution to the problem; it is what finally made me understand the trick!
Preprocess these things first:
• Split the tree into long chains and record the chain head and depth of each point, \(O(n)\)
• Binary lifting: the \(2^i\)-th ancestors of each point, \(O(n\log n)\)
• If the length of a chain is \(len\), record the \(len\) ancestors above the chain head and the \(len\) chain elements below it, \(O(n)\)
• Record the highest set binary bit of each number, \(O(n)\)
The algorithm then runs as follows (a code sketch follows this list):
• First jump by the highest set bit of \(k\) using the lifting array, and let the remaining number of steps be \(k'\); then \(k' < \frac{k}{2} <\) the number of steps already jumped.
• By Property 2 (the long chain containing the \(k\)-th ancestor of any point has length at least \(k\)), the long chain containing the current point must have length \(\geq\) the number of steps already jumped \(> k'\), so the remaining \(k'\)-th ancestor can be read off in \(O(1)\) from the preprocessed up or down array of its chain head.
The complexity bottleneck is the \(O(n\log n)\) preprocessing, but a single query costs only \(O(1)\).
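Here is a minimal self-contained sketch of that structure, written by me from the description above (untested against the original problem; identifiers like kth, hb, up, dn are my own; nodes are 1-indexed, and queries assume the ancestor exists, i.e. \(k\) is less than the depth of the node):

#include <cstdio>
#include <vector>
const int N = 100005, LG = 17;
int n, anc[LG][N], d[N], dep[N], son[N], top[N], hb[N];
std::vector<int> G[N], up[N], dn[N];
void dfs1(int u, int f)
{
    anc[0][u] = f; d[u] = d[f] + 1;
    for (int j = 1; j < LG; j++) anc[j][u] = anc[j-1][anc[j-1][u]];
    for (int v : G[u]) if (v != f)
    {
        dfs1(v, u);
        if (dep[v] > dep[son[u]]) son[u] = v; // deepest child becomes the preferred son
    }
    dep[u] = dep[son[u]] + 1; // dep[u] = number of nodes on the long chain below u
}
void dfs2(int u, int f, int t)
{
    top[u] = t;
    if (u == t) // chain head: store dep[t] steps upward and downward along the chain
    {
        for (int i = 0, x = u; i <= dep[t]; i++, x = anc[0][x]) up[t].push_back(x);
        for (int i = 0, x = u; i <= dep[t]; i++, x = son[x]) dn[t].push_back(x);
    }
    for (int v : G[u]) if (v != f) dfs2(v, u, v == son[u] ? t : v);
}
int kth(int u, int k) // k-th ancestor of u, assuming it exists (k < d[u])
{
    if (!k) return u;
    int h = hb[k]; // highest set bit of k: jump 2^h steps at once
    u = anc[h][u]; k -= 1 << h;
    if (!k) return u;
    int t = top[u], delta = d[u] - d[t]; // distance from u up to its chain head
    // the chain containing u has length >= 2^h > k, so both lookups stay in range
    return k <= delta ? dn[t][delta - k] : up[t][k - delta];
}
// init: hb[1]=0; for i>=2, hb[i]=hb[i>>1]+1; then dfs1(root,0) and dfs2(root,0,root).

Note that after the single binary-lifting jump the query never leaves one chain; that is exactly where Property 2 is used.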
### 2. Optimizing dp
In combination with this example: Hotels
First consider the relative positions of the three points (the original post illustrates the possible configurations with a figure that did not survive extraction). As usual for counting problems on trees, the answers can be accumulated at the \(lca\):
Then count these cases with \(dp\). Let \(f(i,j)\) denote the number of points at depth \(j\) within the subtree of \(i\), and let \(g(i,j)\) denote the number of unordered pairs \((x,y)\) within the subtree of \(i\) satisfying \(d(lca(x,y),x)=d(lca(x,y),y)=d(lca(x,y),i)+j\). Then the answer is accumulated as follows:
• $$ans\leftarrow g(i,0)$$, corresponding to the second case.
• $$ans\leftarrow \sum_{x\not=y} f(x,j-1)g(y,j+1)$$
The transfers follow from the definitions (the sums run over children \(x,y\) of \(i\)):
• $$g(i,j)\leftarrow \sum_{x<y} f(x,j-1)f(y,j-1)$$
• $$g(i,j)\leftarrow\sum g(x,j+1)$$
• $$f(i,j)\leftarrow\sum f(x,j-1)$$
The brute-force transfer is \(O(n^2)\), but notice that the subscripts depend only on depth, so it can be optimized with long chain decomposition. The complexity proof is quite ingenious: since we directly inherit the \(dp\) values of the preferred son and merge each light son by brute force, the cost is the depth of each light chain. Each chain is merged only once, and by Property 1 the total chain length is \(O(n)\), so the overall time complexity is \(O(n)\).
A difficulty in implementing long chain decomposition is inheriting the preferred son's information. It is maintained with pointers carving pieces out of one large preallocated array. To prevent \(\tt RE\), it is safest to allocate generous space.
But there's a little thing I haven't learned yet!
#include <cstdio>
const int M = 100005;
#define int long long
inline int read()
{
int x=0,f=1;char c;
while((c=getchar())<'0' || c>'9') {if(c=='-') f=-1;}
while(c>='0' && c<='9') {x=(x<<3)+(x<<1)+(c^48);c=getchar();}
return x*f;
}
int n,tot,F[M],d[M],dep[M],son[M];
int *f[M],*g[M],p[4*M],*o=p,ans;
struct edge
{
int v,next;
edge(int V=0,int N=0) : v(V) , next(N) {}
}e[2*M];
void pre(int u,int fa)
{
d[u]=d[fa]+1;
for(int i=F[u];i;i=e[i].next)
{
int v=e[i].v;
if(v==fa) continue;
pre(v,u);
if(dep[v]>dep[son[u]]) son[u]=v;
}
dep[u]=dep[son[u]]+1;
}
void dfs(int u,int fa)
{
    if(son[u]) // only inherit pointers and recurse when a preferred son exists
    {
f[son[u]]=f[u]+1,g[son[u]]=g[u]-1;
dfs(son[u],u);
}
f[u][0]=1;
    ans+=g[u][0]; // the line I said I haven't fully understood yet
for(int i=F[u];i;i=e[i].next)
{
int v=e[i].v;
if(v==fa || v==son[u]) continue;
f[v]=o;o+=dep[v]*2;g[v]=o;o+=dep[v]*2;
dfs(v,u);
for(int j=0;j<dep[v];j++)
{
if(j) ans+=f[u][j-1]*g[v][j];
ans+=g[u][j+1]*f[v][j];
}
for(int j=0;j<dep[v];j++)
{
g[u][j+1]+=f[v][j]*f[u][j+1];
if(j) g[u][j-1]+=g[v][j];
f[u][j+1]+=f[v][j];
}
}
}
signed main()
{
    n=read();
    for(int i=1;i<n;i++)
    {
        int u=read(),v=read();
        e[++tot]=edge(v,F[u]),F[u]=tot;
e[++tot]=edge(u,F[v]),F[v]=tot;
}
pre(1,0);
f[1]=o;o+=dep[1]*2;g[1]=o;o+=dep[1]*2;
dfs(1,0);
printf("%lld\n",ans);
}
### 3. Strange applications?
Something to read again when I have time: \(yyb\)'s blog post:
https://www.cnblogs.com/cjyyb/p/9479258.html
http://www.thejach.com/

# Automated anonymous surveying
Jonathan Blow was recently quoted in media as saying: "...piracy rates for PC games are often 85-90 percent. That's true. If 10 percent of people who pirate games would buy the games, that would double profits. Double! That's insane. That's the difference between starving to death and being comfortable enough to make the next game." This bugged me for a few reasons, and this from someone who never pirates games.
First check: does the math make sense? (Skip to the last parenthetical, it sort of does.) If you sell your game for $10, and get 100 customers, you've made$1000. But if the piracy rate means that if you track the count of legit users and track the count of pirate users (assuming none overlap, I'll get to that) you should see around 85-90 pirate users per 100 legit users. In other words, another $850-$900 in missing sales. If just 10 percent of those 85-90, 8.5-9, we'll round to 9, bought the game, that would result in an increase in sales by $90, bringing the total to$1090. This is nowhere near "double" revenue, but can it be double profit? Maybe I'm misunderstanding what he means by his whole remark -- perhaps he means for his game in particular? But he hasn't made a profit yet, so that seems doubtful. The only way the statement could be true is if the game cost $910 to make. If that is true, then at 100 sales, you've made$90. And if 10% of the pirate users paid, you've made another $90, doubling your profits. But this doesn't hold for any further periods of time. If after the game has been around for a while, you have made 1000 sales total (and there are now 900 pirates), you have made$10,000 in total sales, and a total profit of $8,090. Now assume 10% of those pirates now pay, or 90 users, that would net you an additional$900 in profit. This is far short of double profit. So his statement makes no sense mathematically, at least to me. (Okay, let's try one more time... Let's suppose that a 90% piracy rate means that if there are 100 copies of a game out there, 90 of them are pirated, and only 10 of them are legit individual sales. Look at 1000 copies out there, only 100 legit, total sales is thus $1000, let's say the game cost$100 to make, so profit is $900. If 90/900 pirates bought, that's an extra$900, so double profit. As you increase the number of copies, or take the cost-to-create to \$0, the limit is actually 1.9 though, not strictly double. I assume this is what was meant.)
Second check: this ignores the possibility that the 10% of people who pirate your game may have also already bought it, before or after pirating. If so, and if we also assume the remark is true (in whatever way), then if you waved a magic wand to suddenly get rid of piracy, your profits could halve!
# Better world proposals as admissible heuristics
In CS, and graph searching in particular, an admissible heuristic is one that never overestimates the true cost of something (typically, of achieving some goal). Different heuristics may give different estimates, but admissible ones never overestimate; the classic example is straight-line distance as an estimate of driving distance, which can only underestimate.
The goal of utopia is a perfect world, or at least a perfect-as-possible world. Dystopian fiction contains great fictional examples of how some would-be utopias aren't actually all that great. But I contend that a lot of those dystopias are still actually better than the present world, overall, and that reaching the perfect world may require such stepping stones. I worry that dystopias can represent local optima and thus be worse in the sense of cutting off the possibility for improvement, but I'm not sure that's possible on a global scale for all time.
Thus it's important to remember that proposals to make this world better, or ideas and visions of what possible future worlds might be like -- say an ill-defined World Without Suffering -- aren't proposing the ultimate perfect utopia, but merely improvements. And if they are better overall, and don't try to pretend to be perfect and final, then we can consider them admissible... To take the previous example, perhaps a certain amount of suffering is needed for human existence to have meaning. However the world is currently full of much suffering that I think we would be better off without, and once we are without, then perhaps we can reason on a further improvement to introduce the right amount back in, which would be another overall improvement on the path to perfection and thus admissible too.
# Nim project
Finished up this fun little side project introducing myself to Nim (and SDL2 in the process): https://github.com/Jach/dodgeball_nim_pygame_comparison
# In favor of privacy, but not as a right
I don't really believe in "rights". I believe in assurances granted by others, and when those others happen to be governments, whether it's a "right" or a "law" matters little to me. But other conceptions of "rights", don't buy it. If you try to argue some rights are objective, or even self-evident, I don't buy it even harder.
I still think many (though not all) of the things I supposedly have rights to are nice to have, though, but not for the circular reason that rights are good.
When it comes to privacy, I generally fall into the "none of your/my damn business" camp. There are many things I or you simply don't need to know, and I'll get ticked if you start trying to learn those things, and I'll understand if you get ticked in the other direction. For instance, say you're visiting my blog, and my blog asks for your browser to share your location (which may be from a phone, and thus very accurate). This is none of my damn business; I'm not trying to serve you software that makes use of mapping, something whose business legitimately is interested in your location.
# Some questions on Star Wars
Warning, spoilers below.
I finally saw the Star Wars movie yesterday. I liked it while watching, and I'd watch it again, though on reflection there are some grievances, or just questions I had while watching or after watching that it'd be nice to have answers for... I'll probably research some after I post this. So break out the pizza rolls, it helps if you read everything below in that voice.
Why are there so many tiny kids in the audience? This is a PG-13 movie. Did their parents not see Episode 3, or are they comfortable with the possibility of their kids seeing on-screen amputation and dead children and flesh-burning? Maybe they just trust Disney is a family friendly company like Nintendo and would never be too violent...
# Some (Updated) Beliefs
Years ago I wrote this, expressing without too much elaboration or reasoning several beliefs of mine in various categories. Needless to say, some have changed, and this gives me an outlet to write a little about what I haven't been writing about. So, following the original categorization (with a few new categories), here are some of my current beliefs. If I don't address an old one, conclude it hasn't really changed. Please keep in mind most of these are "academic level" beliefs and thus I'm not super attached to them, for clarity on that (and maybe some updated beliefs if this post is old) see here.
### Religion
I stand by my original belief in 2009. The only thing I might add is that I think the rituals employed by religion can be useful; ritual itself is important -- see these.
https://zbmath.org/?q=an:1158.28302

# zbMATH — the first resource for mathematics
Asymptotic properties of finite dimensional conditional distributions of spherically symmetric measures on a locally convex space. (English, Russian) Zbl 1158.28302
Russ. Math. 49, No. 3, 67-74 (2005); translation from Izv. Vyssh. Uchebn. Zaved., Mat. 2005, No. 3, 71-78 (2005).
From the introduction: We consider finite dimensional projections of spherically symmetric measures on a locally convex space. We construct these projections as combined functions of distributions of finite systems of measurable linear functionals belonging to an orthonormal basis of the reproducing Hilbert space of the Gaussian measure which generates the given spherically symmetric measure. Then, for any projection (a finite system of basis functionals) we consider the corresponding conditional distribution of a fixed subsystem of functionals with respect to the other functionals of the same system. We prove that any such distribution almost surely converges to the Gaussian distribution as the dimension tends to infinity. We also establish the relationship of the obtained results with the logarithmic derivatives of spherically symmetric measures.
##### MSC:
28C20 Set functions and measures and integrals in infinite-dimensional spaces (Wiener measure, Gaussian measure, etc.)
http://physicshelpforum.com/kinematics-dynamics/1884-vertical-horizontal-distance-problem.html

Physics Help Forum: Vertical/Horizontal distance problem
Apr 5th 2009, 07:10 PM  #1
Senior Member. Join Date: Mar 2009. Posts: 129

Vertical/Horizontal distance problem

"A balloon is rising at a vertical velocity of 4.9 m/s. At the same time, it is drifting horizontally with a velocity of 1.6 m/s. If a bottle is released from the balloon when it is 9.8 m above the ground, determine (a) the time it takes for the bottle to reach the ground, and (b) the horizontal displacement of the bottle from the balloon."

Here's what I attempted:

Δd_vertical = (−4.9 m/s²)(Δt)²
−9.8 m = (−4.9 m/s²)(Δt)²
2 s² = (Δt)²
Δt = 1.41 s

Δd_horizontal = (1.6 m/s)(1.41 s)
Δd_horizontal = 2.26 m

Here are the answers from the back of the book:
a) 2 s
b) 0

PLEASE HELP!!! Thanks in advance!

Last edited by s3a; Apr 5th 2009 at 07:18 PM.
Apr 8th 2009, 06:30 PM #2
Senior Member
Join Date: Aug 2008
Posts: 113
Originally Posted by s3a: (the question and attempted solution quoted above)
$\displaystyle \Delta y = v_{oy}t - \frac{1}{2}gt^2$
$\displaystyle -9.8 = 4.9t - 4.9t^2$
$\displaystyle -2 = t - t^2$
$\displaystyle t^2 - t - 2 = 0$
$\displaystyle (t - 2)(t+1) = 0$
$\displaystyle t = 2$ sec
as the bottle falls, the balloon stays directly over it ... both have the same horizontal velocity. therefore, horizontal displacement relative to the balloon is 0.
https://homework.cpm.org/category/ACC/textbook/ccaa8/chapter/10%20Unit%2011/lesson/CCA:%2010.3.2/problem/10-127 | ### Home > CCAA8 > Chapter 10 Unit 11 > Lesson CCA: 10.3.2 > Problem10-127
10-127.
Solve algebraically to find all points where the graphs of $y=x^2−3x+2$ and $y=2x+8$ intersect.
Use the Equal Values Method. Note that since you have an $x^2$-term, you should be looking for two answers.
Substitute your answers, one at a time, back into one of the equations, and solve for the corresponding $y$-coordinates.
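A worked version of those hints (my own algebra, following the Equal Values Method named above): setting the two expressions for $y$ equal gives $x^2-3x+2=2x+8$, i.e. $x^2-5x-6=0$, which factors as $(x-6)(x+1)=0$, so $x=6$ or $x=-1$; substituting into $y=2x+8$ yields $y=20$ and $y=6$, respectively.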
$(−1,6)$ and $(6,20)$
https://stackoverflow.com/questions/18194025/perform-search-on-whole-directory-in-sublime-text-2/30496837

# Perform Search on Whole Directory in Sublime Text 2?
Is there any directory-wide search functionality in Sublime for the directory currently opened in the editor?
Or optionally a search all opened files? (If this exists do the files have to be opened in a tab or just visible on the sidebar?)
Yes there is.
On Windows
CTRL + SHIFT + F
On Macintosh
CMD + SHIFT + F
The Where field in the search panel determines where to search. You can define the scope of the search in several ways.
• I have Sublime Text 3. Ctrl+Shift+F didn't work for me, but I found this option in the menu: Find -> "Find in Files..." Oct 5 '14 at 3:47
• Do you have a folder open? I have Sublime Text 3 and it's working here (on a Mac).
– Will
Oct 21 '14 at 1:11
In Sublime Text 3
Right click on FOLDERS Navigation bar
Choose Find in Folder
• I had to remove "Sublime Sidebar Enhancements" plugin to see "Find in folder..." again. It's strange that it removes this functionality... Jan 28 '16 at 17:01
*/folder_name/*
• In the "Where" section of the find-all dialogue (CtrlShift+F or Shift+F ), */folder_name/* will search folders called "folder_name" that are represented in your current session. For instance, if you have a file open with a path of C:\Users\joe\folder_name\file.js, you can use the *//* pattern to search any of those folders or combinations of folders: */joe/* and */Users/joe/* will both work. However, if you have a file like this C:\Users\timmy\folder_name\file.js that's not open, it won't search that (unless you explicitly name it, like in the next example).
C:\path\to\folder
• You can also put in the absolute path to the folder you want to search. This is useful if you want to search a folder that is not represented in sublime (no files within that folder are currently open in sublime), or if you have two dirs with the same name, and you only want to search one. Personally, I never use this.
C:\path\to\folder, */folder_name/*
• You can also combine them.
To answer your last question, at some point Sublime started automatically searching all open files and represented folders, but if you want to be sure you can use one or all of these variables:
<project>,<current file>,<open files>,<open folders>
You can read more about searching at the unofficial sublime documentation. Or from this post, which is similar to your own.
https://basepub.dauphine.fr/handle/123456789/17014?show=full

Denoyelle, Quentin; Duval, Vincent; Peyré, Gabriel. "Support Recovery for Sparse Super-Resolution of Positive Measures." Journal of Fourier Analysis and Applications 23(5), 2016, pp. 1153–1194. doi:10.1007/s00041-016-9502-x. ISSN 1069-5869.

Keywords: Radon measure; sparse signal processing; super-resolution; sparsity; deconvolution; convex optimization; LASSO; BLASSO.

Abstract: We study sparse spikes super-resolution over the space of Radon measures on $\mathbb{R}$ or $\mathbb{T}$ when the input measure is a finite sum of positive Dirac masses, using the BLASSO convex program. We focus on the recovery properties of the support and the amplitudes of the initial measure in the presence of noise, as a function of the minimum separation t of the input measure (the minimum distance between two spikes). We show that when $w/\lambda$, $w/t^{2N-1}$ and $\lambda/t^{2N-1}$ are small enough (where $\lambda$ is the regularization parameter, w the noise and N the number of spikes), which corresponds roughly to a sufficient signal-to-noise ratio and a noise level small enough with respect to the minimum separation, there exists a unique solution to the BLASSO program with exactly the same number of spikes as the original measure. We show that the amplitudes and positions of the spikes of the solution both converge toward those of the input measure when the noise and the regularization parameter drop to zero faster than $t^{2N-1}$.
http://fluidsengineering.asmedigitalcollection.asme.org/article.aspx?articleid=1433948
Research Papers: Fundamental Issues and Canonical Flows
# Particle Image Velocimetry Study of Rough-Wall Turbulent Flows in Favorable Pressure Gradient
Author and Article Information
G. F. K. Tay, M. F. Tachie
Department of Mechanical and Manufacturing Engineering, University of Manitoba, Winnipeg, MB, R3T 5V6, Canada
D. C. S. Kuhn (corresponding author)
Department of Mechanical and Manufacturing Engineering, University of Manitoba, Winnipeg, MB, R3T 5V6, Canada; dkuhn@cc.umanitoba.ca
J. Fluids Eng 131(6), 061205 (May 15, 2009) (12 pages) doi:10.1115/1.3112389 History: Received September 04, 2008; Revised February 08, 2009; Published May 15, 2009
## Abstract
This paper reports an experimental investigation of the effects of wall roughness and favorable pressure gradient on low Reynolds number turbulent flow in a two-dimensional asymmetric converging channel. Flow convergence was produced by means of ramps (of angles 2 deg and 3 deg) installed on the bottom wall of a plane channel. The experiments were conducted over a smooth surface and over transitionally rough and fully rough surfaces produced from sand grains and gravel of nominal mean diameters 1.55 mm and 4.22 mm, respectively. The dimensionless acceleration parameter was varied from $0.38\times10^{-6}$ to $3.93\times10^{-6}$ while the Reynolds number based on the boundary layer momentum thickness was varied from 290 to 2250. The velocity measurements were made using a particle image velocimetry technique. From these measurements, the distributions of the mean velocity and Reynolds stresses were obtained to document the salient features of transitionally and fully rough low Reynolds number turbulent boundary layers subjected to favorable pressure gradient.
## Figures
Figure 10
Effects of surface roughness on the mean velocity defect, turbulent intensities, and the Reynolds shear stress. The vertical lines in (b), (d), (f), and (h) correspond to the edge of the roughness sublayer.
Figure 11
Effects of surface roughness on the stress ratios. (a) $\rho_{uv}=-\overline{uv}/(\overline{u^2}\,\overline{v^2})^{0.5}$, (b) $\overline{v^2}/\overline{u^2}$, (c) $-\overline{uv}/\overline{u^2}$, and (d) $-\overline{uv}/\overline{v^2}$. The symbols are as in Fig. 1.
Figure 12
Effects of combined FPG and surface roughness on the mean velocity, turbulent intensities, and the Reynolds shear stress
Figure 13
Effects of combined FPG and surface roughness on the stress ratios. The symbols are as in Fig. 1.
Figure 1
Schematic of the test section: (a) test channel showing the converging section and the three measurement planes where data were acquired; (b) a three-dimensional view of the ramp used to produce the converging section; W=179 mm is the internal width of the test channel, and α=2 deg or 3 deg is the angle of the ramp. In (a), P denotes measurement plane and L defines the exact x-location where profiles were extracted in a given plane.
Figure 9
Effects of FPG on the turbulent intensities and Reynolds shear stress normalized by $(U_\tau, h^*)$, and the correlation coefficient, over the smooth surface ((a), (c), (e), and (g)) and sand grain roughness ((b), (d), (f), and (h))
Figure 2
Profiles of boundary layer parameters. Symbols: ○, SMα2U0.25; ●, SMα2U0.50; ◑, SMα3U0.25; ⊕, SMα3U0.50; ◻, SGα2U0.25; ◼, SGα2U0.50; ◨, SGα3U0.25, ⊞, SGα3U0.50; △, GVα2U0.25; ▲, GVα2U0.50; ◮, GVα3U0.25; and △+, GVα3U0.50. Lines are for visual aid only.
Figure 3
Mean velocity profiles over the smooth- and rough-walls in the inner coordinates
Figure 4
Distributions of the mean velocity and mean velocity defect over the smooth-wall and the sand grain roughness. The numbers in parentheses correspond to the value of Reθ for the particular test conditions.
Figure 5
Turbulent intensities and Reynolds shear stress over the smooth-wall and sand grain
Figure 6
Distributions of the Reynolds stresses and stress ratios over the smooth surface compared with DNS from Ref. 31
Figure 7
Effects of FPG on the distributions of the mean velocity and the mean velocity defect over the smooth surface ((a), (c), and (e)) and sand grain roughness ((b), (d), and (f)) in the outer coordinates
Figure 8
Effects of FPG on the distributions of the turbulent intensities and Reynolds shear stress over the smooth surface ((a), (c), and (e)) and sand grain roughness ((b), (d), and (f)) in the outer coordinates
https://datascience.stackexchange.com/questions/102195/how-to-inference-of-time-series-with-rnnlike-lstm-gru-etc

# How to do inference on time series with RNNs (like LSTM, GRU, etc.)
Say I am doing a time series prediction which predicts some value for the next time step from the past T historical inputs. Say I am using an RNN module like LSTM or GRU.
In training/validation, I fed the RNN module with batches of data of shape (batch_size, T, *) to train a model.
When inferencing, I can either:
1. Always use the past T inputs to get the next-step prediction, then discard the state of the RNN module. That is: use input from time -T to -1 to get the prediction at t=0 (last output of the LSTM or GRU module), then discard the final state of the RNN module and use input from time -T+1 to 0 to get the prediction at t=1, etc.
2. Keep the RNN state, and each time feed only one input to get the prediction. That is: first use input from time -T to -1 to get the prediction at t=0 as above. Then keep the current state of the RNN and feed it only the input at t=0 to get the prediction at t=1, then the input at t=1 to get the prediction at t=2, etc.
Which one is better? Or does it depend on the specific problem? Thanks
http://mathoverflow.net/questions/61933/lattice-reduction-in-r3-r4-or-what-is-fundamental-domain-for-sl3-z-sl4

Lattice reduction in R^3 (R^4), or what is a fundamental domain for SL(3,Z) (SL(4,Z))?
Consider a lattice in R^3. Is there some "canonical" way, or ways, to choose a basis in it?
I mean, in R^2 we can choose a basis with |h_1| < |h_2| and |(h_2, h_1)| < 1/2 |h_1|^2. Considering lattices with fixed determinant and up to unitary transformations, we get the standard picture of PSL(2,Z) acting on the upper half plane, which has the fundamental domain |tau| > 1, |Re(tau)| < 1/2.
What are the similar results for the other small dimensions R^3, R^4, C^4, C^8? What are the algorithms to find such lattice reductions?
I actually meant the Gram matrix for a basis of the lattice, so it is both positive definite and symmetric. A change of basis matrix $U \in \textup{GL}_n(\mathbb{Z})$ acts on a Gram matrix $M$ by sending it to $UMU^t$. There are a couple of advantages of using these coordinates. One is that passing to the Gram matrix automatically mods out by the orthogonal group, and the other is that some constraints that are nonlinear in terms of a basis matrix become linear in terms of a Gram matrix. (For example, vector lengths in the lattice are linear functions of the Gram matrix entries.) – Henry Cohn Apr 17 '11 at 12:46
https://math.stackexchange.com/questions/1462767/how-to-find-pythagoras-triplet-using-the-fibonacci-sequence

# How to find Pythagorean triplets using the Fibonacci sequence?
I'm using the Fibonacci sequence to generate some Pythagorean triples ($(3, 4, 5)$, etc.) based off this page: Formulas for generating Pythagorean triples, starting at "Generalized Fibonacci Sequence".
For Fibonacci numbers starting with $F_1=0$ and $F_2=1$ and with each succeeding Fibonacci number being the sum of the preceding two, one can generate a sequence of Pythagorean triples starting from $(a_3, b_3, c_3) = (4, 3, 5)$ via $$(a_n, b_n, c_n) = (a_{n-1}+b_{n-1}+c_{n-1}, F_{2n-1}-b_{n-1}, F_{2n})$$
for $n \ge 4$.
I am unable to generate Pythagorean triplet sequence using Fibonacci series.
Kindly Help!!!!!!!!!
You should get something like
| n | Fib_{2n-1} | Fib_{2n} | a_n | b_n | c_n |
|---|------------|----------|-----|-----|-----|
| 3 | 3          | 5        | 4   | 3   | 5   |
| 4 | 8          | 13       | 12  | 5   | 13  |
| 5 | 21         | 34       | 30  | 16  | 34  |
| 6 | 55         | 89       | 80  | 39  | 89  |
| 7 | 144        | 233      | 208 | 105 | 233 |
| 8 | 377        | 610      | 546 | 272 | 610 |

etc.
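For what it's worth, here is a small standalone program (my own sketch, not from the answer) that reproduces the table above and checks that $a_n^2+b_n^2-c_n^2=0$ on every row:

#include <cstdio>
int main()
{
    long long F[40]; F[1] = 0; F[2] = 1; // F_1 = 0, F_2 = 1, as in the question
    for (int i = 3; i < 40; i++) F[i] = F[i-1] + F[i-2];
    long long a = 4, b = 3, c = 5; // (a_3, b_3, c_3)
    for (int n = 3; n <= 8; n++)
    {
        printf("n=%d: (%lld, %lld, %lld), a^2+b^2-c^2 = %lld\n", n, a, b, c, a*a + b*b - c*c);
        // recurrence from the question: (a,b,c) -> (a+b+c, F_{2(n+1)-1} - b, F_{2(n+1)})
        long long na = a + b + c, nb = F[2*(n+1) - 1] - b, nc = F[2*(n+1)];
        a = na; b = nb; c = nc;
    }
    return 0;
}

This should print the six rows of the table with a^2+b^2-c^2 = 0 on each, confirming they are Pythagorean triples (though, as the comments note, not all triples arise this way).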
• But I have to generate Pythagorean triplets; where is the case {6,8,10} etc.? @Henry – Aditya Sharma Oct 3 '15 at 18:06
• @AdityaSharma: Try to calculate $a_n^2+b_n^2-c_n^2$ from the values in my table. It generates some Pythagorean triplets, not all of them. – Henry Oct 3 '15 at 18:07
• where is the case {6,8,10},etc.... in your case a_n^2+b_n^2-c_n^2=0 – Aditya Sharma Oct 3 '15 at 18:10
• some not all – Henry Oct 3 '15 at 18:11
• but i want to generate all triplets , how to generate them? – Aditya Sharma Oct 3 '15 at 18:12
http://www.oxfordmathcenter.com/drupal7/node/605

# Objects - A First Look
### Objects and their Construction
We will go into much greater detail regarding exactly what a Java "object" is later, but for now -- you can think of an object as something that can both store data and perform various actions.
The data might include information stored in various primitive data types or even other objects.
The actions an object can take are described by the methods associated with that object.
Exactly what type of data is stored and the details of the methods it possesses are defined by the object's class. For now, you can think of a class as a special "type", created by a programmer -- and a blueprint for the construction of corresponding objects.
For example, there is a pre-defined class called JButton that can be used to create buttons like the ones you might see in an application window.
An instance of the JButton class is a single JButton object. JButton objects store data in that they have a height, a width, a position, text on the button, etc... JButton objects have methods that generally take some action, when they are clicked. They may also do something when you hover over them with the mouse (like light up).
A JButton object (and every object, for that matter) must be stored in memory -- and like their primitive-data-type cousins, referencing these objects and the data they contain can be accomplished through the use of variables.
Not surprisingly, given their potential complexity, initialization and assignment for objects work a bit differently than initialization and assignment for primitive data types. For example, suppose we wish to have a variable called myButton refer to a JButton object. Knowing that there is a lot of data associated with a single button (height, width, position, text on the button, etc., as just mentioned), what would you put after the equals sign in the code below?
JButton myButton = … ;
There are many assignments that need to be made here -- many actions to be taken. Of course, performing some action (or actions) is what methods were designed to do. In every class (that you can instantiate with an object), there will be a special method called the constructor that does all of the things that need to be done to create a new object of that class.
This constructor method always has the same name as the class, followed (as always) by some parentheses which may or may not include some additional parameters the method might need.
To create the object (that your variable will then reference) you need to use the “new” keyword followed by the call to the constructor method.
So, for example, to create two new JButton objects named myButton1 and myButton2, we would write the following:
JButton myButton1 = new JButton(); // new JButton with no text
JButton myButton2 = new JButton("OK"); // new JButton with text "OK"
As with primitive types, you can split the declaration and instantiation/initialization up into two steps:
JButton myButton;
myButton = new JButton();
https://elkement.wordpress.com/tag/dimensional-analysis/

# On Photovoltaic Generators and Scattering Cross Sections
Subtitle: Dimensional Analysis again.
Our photovoltaic generator has about 5 kW rated ‘peak’ power – 18 panels with 265W each.
South-east oriented part of our generator – 10 panels. The remaining 8 are oriented south-west.
Peak output power is obtained under so-called standard testing condition – 1 kWp (kilo Watt peak) is equivalent to:
• a panel temperature of 25°C (as efficiency depends on temperature)
• an incident angle of sunlight relative to zenith of about 48° – equivalent to an air mass of 1,5. This determines the spectrum of the electromagnetic radiation.
• an irradiance of solar energy of 1kW per square meter.
Simulated spectra for different air masses (Wikimedia, User Solar Gate). For AM 1 the path of sunlight is shortest and thus absorption is lowest.
The last condition can be rephrased as: We get 1 kW output per kW/m2 input. 1 kWp is thus defined as:
1 kWp = 1 kW / (1 kW/m2)
Canceling kW, you end up with 1 kWp being equivalent to an area of 1 m2.
Why is this a useful unit?
Solar radiation generates electron-hole pairs in solar cells, operated as photodiodes in reverse bias. Only if the incoming photon has exactly the right energy, solar energy is used efficiently. If the photon is not energetic enough – too ‘red’ – it is lost and converted to heat. If the photon is too blue – too ‘ultraviolet’ – it generates electrical charges, but the greater part of its energy is wasted as the probability of two photons hitting at the same time is rare. Thus commercial solar panels have an efficiency of less than 20% today. (This does not yet say anything about economics as the total incoming energy is ‘free’.)
The less efficient solar panels are, the more of them you need to obtain a certain target output power. A perfect generator would deliver 1 kW output with a size of 1 m2 at standard test conditions. The kWp rating is equivalent to the area of an ideal generator that would generate the same output power, and it helps with evaluating if your rooftop area is large enough.
Our 4,77 kW generator uses 18 panels, about 1,61 m2 each – so 29 m2 in total. Panels’ efficiency is then about 4,77 / 29 = 16,4% – a number you can also find in the datasheet.
There is no rated power comparable to that for solar thermal collectors, so I wonder why the unit has been defined in this way. Speculating wildly: Physicists working on solar cells usually have a background in solid state physics, and the design of the kWp rating is equivalent to a familiar concept: Scattering cross section.
An atom can be modeled as a little oscillator, driven by the incident electromagnetic energy. It re-radiates absorbed energy in all directions. Although this can be fully understood only in quantum mechanical terms, simple classical models are successful in explaining some macroscopic parameters, like the index of refraction. The scattering strength of an atom is expressed as:
[ Power scattered ] / [ Incident power of the beam / m2 ]
… the same sort of ratio as discussed above! Power cancels out and the result is an area, imagined as a ‘cross-section’. The atom acts as if it were an opaque disk of a certain area that ‘cuts out’ a respective part of the incident beam and re-radiates it.
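Writing this down explicitly (my own notational summary of the paragraph above, not from the quoted texts):

$\sigma = P_{scattered} / I_{incident}$

With power in W and intensity in W/m2, the units of σ are W / (W/m2) = m2. So a cross-section, just like the kWp rating, is a ratio of powers in disguise that ends up carrying units of area.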
The same concept is used for describing interactions between all kinds of particles (not only photons) – the scattering cross section determines the probability that an interaction will occur:
Particles’ scattering strengths are represented by red disks (area = cross section). The probability of a scattering event going to happen is equal to the ratio of the sum of all red disk areas and the total (blue+red) area. (Wikimedia, User FerdiBf)
# Rowboats, Laser Pulses, and Heat Energy (Boring Title: Dimensional Analysis)
Dimensional analysis means to understand the essentials of a phenomenon in physics and to calculate characteristic numbers – without solving the underlying, often complex, differential equation. The theory of fluid dynamics is full of interesting dimensionless numbers – Reynolds Number is perhaps most famous.
In the previous post on temperature waves I solved the Heat Equation for a very simple case, in order to answer the question How far does solar energy get into ground in a year? Reason: I have been working on simulations of our heat pump system since a few years. This also involves heat transport between the water/ice tank and ground. If you set out to simulate a complex phenomenon you have to make lots of assumptions about materials’ parameters, and you have to simplify the system and equations you use for modelling the real world. You need a way of cross-checking if your results sound plausible in terms of orders of magnitude. So my goal has been to find yet another method to confirm assumptions I have made about the thermal properties of ground elsewhere.
Before I am going to revisit heat transport, I’ll try to explain what dimensional analysis is – using the best example I’ve ever seen. I borrow it from theoretical physicist – and awesome lecturer – David Tong:
How does the speed of a rowing boat depend in the number of rowers?
References: Tong’s lecture called Dynamics and Relativity (Chapter 3), This is the original paper from 1971 Tong quotes: Rowing: A similarity analysis.
The boat experiences a force of friction in water. As for a car impeded by the friction of the surrounding air, the force of friction depends on velocity.
Force is the change of momentum, momentum is proportional to mass times velocity. Every small ‘parcel’ of water carries a momentum proportional to speed – so force should at least be proportional to one factor of v. But these parcel move at a speed v, so the faster they move the more momentum is exchanged with the boat; so there has to be a second factor of v, and force is proportional to the square of the speed of the boat.
The larger the cross-section of the submerged part of the boat, A, the higher is the number of collisions between parcels of water and the boat, so putting it together:
$F \sim v^{2}A$
Rowers need to put in power to compensate for friction. Power is energy per time, and Energy is force times distance. Since distance over time is velocity, thus power is also force times velocity.
So there is one more factor of v to be included in power:
$P \sim v^{3}A$
For the same reason wind power harvested by wind turbines is proportional to the third power of wind speed.
A boat does not sink because downward gravity and upward buoyancy just compensate each other; buoyancy is the weight of the volume of water displaced. The heavier the load, the more water needs to be displaced. The submerged volume of the boat V is proportional to the weight of the rowers, and thus to their number N if the mass of the boat itself is negligible:
$V \sim N$
The volume of something scales with the third power of its linear dimensions – think of a cube or a sphere; so the surface area scales with the square of the length, and the cross-section A scales with $V^{\frac{2}{3}}$ – and thus with $N^{\frac{2}{3}}$:
$A \sim N^{\frac{2}{3}}$
Each rower contributes the same share to the total rowing power, so:
$P \sim N$
Inserting for A in the first expression for P:
$P \sim v^{3} N^{\frac{2}{3}}$
Eliminating P as it has been shown to be proportional to N:
$N \sim v^{3} N^{\frac{2}{3}}$
$v^{3} \sim N^{\frac{1}{3}}$
$v \sim N^{\frac{1}{9}}$
… which is in good agreement with measurements according to Tong.
Heat Transport and Characteristic Lengths
In the last post I’ve calculated characteristic lengths, describing how heat is slowly dissipated in ground: 1) The wavelength of the damped oscillation and 2) the run-out length of the enveloping exponential function.
Both are proportional to the square root of a simple number:
$l \sim \sqrt{D \tau}$
… the factor of proportionality being ‘small’ on a logarithmic scale, like π or 2 or their inverse. τ is the period, and D was a number expressing how well the material carries away heat energy.
There is another ‘simple’ scenario that also results in a length scale described by
$\sqrt{D \tau}$ times a small number: If you deposit a confined ‘lump of heat’, a ‘point heat’, it will peter out, and the average width of the lump after some time τ is about this length as well.
Using a very short laser pulse to heat solid material is very close to depositing ‘point heat’. Decades ago I worked with pulsed excimer lasers, used for ablation (‘shooting off’) of material from ceramic targets. This type of laser is used in eye surgery today:
Heat is deposited in nanosecond pulses, and the run-out length of the heat peak in the material is about $\sqrt{D \tau}$, with τ equal to the laser's pulse length of several nanoseconds. As the pulse duration is short, the penetration depth is short as well, and tissue is ‘cut’ precisely without heating much of the underlying material.
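As a rough plausibility check (my own numbers): water-like tissue has $D \approx 1.4 \cdot 10^{-7}$ m2/s, so for a pulse length of $\tau = 10$ ns

$\sqrt{D \tau} \approx \sqrt{1.4 \cdot 10^{-7} \cdot 10^{-8}} \, \textrm{m} \approx 40 \, \textrm{nm}$

– heat hardly diffuses beyond the directly irradiated layer during the pulse.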
So this type of $\sqrt{D \tau}$ length is not just a result of a calculation for a specific scenario, but it rather seems to encompass important characteristics of heat conduction as such.
The unit of D is area over time, m2/s. If you accept the heat equation as a starting point, analysing the dimensions involved by counting x and t, you see that D has to contain two powers of x per one power of t. Half of applied physics and engineering is about getting units right.
But I pretend I don’t even know the heat equation and ‘visualize’ heat transport in this way: ‘Something’ – like heat energy – is concentrated in space and closely peters out. The spreading out is faster, the more concentrated it is. A thin needle-like peak quickly becomes a rounded hill, and then is flattened gradually. Concentration in space means curvature. The smaller the space occupied by the lump of heat is, the smaller its radius, the higher its curvature as curvature is the inverse of the radius of a tangential circular path.
I want to relate curvature to the change with time. Change in time has to be measured in units including the inverse of time; curvature comes with inverse powers of length. Equating those, you have to come up with something including the square of a spatial dimension per one temporal dimension – something like D [m2/s].
How to get a characteristic length from this? D has to be multiplied by a characteristic time, and then we need to take a the square root. So we need to put in some characteristic time, that’s a property of the specific system investigated and not of the equation – like the yearly period or the laser pulse. And the resulting length is exactly that $l \sim \sqrt{D \tau}$ that shows up in any of of the solutions for specific scenarios.
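Plugging in numbers for the scenario that started all this (my own estimate, assuming a typical thermal diffusivity of soil of about $10^{-6}$ m2/s, and τ equal to one year, about $3.2 \cdot 10^{7}$ s):

$\sqrt{D \tau} \approx \sqrt{10^{-6} \cdot 3.2 \cdot 10^{7}} \, \textrm{m} \approx 5.7 \, \textrm{m}$

So the yearly temperature wave penetrates ground on a scale of a few meters – consistent with the previous post on temperature waves.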
_________________________________
The characteristic width of the spreading lump of heat is visible in the so-called Green's functions. These functions describe a system's response to a 'source' which resembles a needle-like peak in time. In this case it is a Gaussian function with a 'width' $\sim \sqrt{D \tau}$. See e.g. equation (14) on PDF-page 14 of these lecture notes.
https://physics.stackexchange.com/questions/281758/different-energies-in-finite-potential-well | # Different energies in finite potential well
When solving the Schrodinger equation in case of finite potential well, we get the following equations after separation: $$(1)-\frac{\hbar^2}{2 m} \frac{d^2 \psi_1}{d x^2} = ( E_1 - V_o) \psi_1$$ $$(2)-\frac{\hbar^2}{2 m} \frac{d^2 \psi_2}{d x^2} = E_2 \psi_2$$ $$(3)-\frac{\hbar^2}{2 m} \frac{d^2 \psi_3}{d x^2} = ( E_3 - V_o) \psi_3$$
I was wondering why all of $E_1$, $E_2$, $E_3$ should be equal. Using the argument for energy, I get that they should. But mathematically they pose no problem whatsoever as solutions to the equation. Moreover, if I use the fact that the second order derivative of $\psi$ should exist (which I haven't seen anyone else doing), I get the additional relation that $E_1 = E_3$ and $V = E_1 - E_2$. I know this looks absurd when viewed in terms of energy, but why isn't this actually valid?
Edit:
This is what I'm saying should be done:
$\psi = \begin{cases} \psi_1, & \mbox{if }x<0\mbox{ (the region outside the box)} \\ \psi_2, & \mbox{if }0<x<L\mbox{ (the region inside the box)} \\ \psi_3 & \mbox{if }x>L\mbox{ (the region outside the box)} \end{cases}$
where $\psi_1 = Ae^{\alpha x}$, $\psi_2 = C\sin(kx) + D\cos(kx)$, $\psi_3 = Fe^{- \alpha x}$, along with some relations between $A$, $C$, $D$, and $F$ following from continuity of $\psi$ and $\frac{\partial \psi}{\partial x}$.
Now if we impose existence of $\frac{\partial^2 \psi}{\partial x^2}$, for the first and second regions we have $LHD( \frac{\partial \psi}{\partial x} ) = RHD( \frac{\partial \psi}{\partial x} )$ (left and right hand derivative). Now substituting values from the Schrodinger equation, we have
$( E_1 - V_o) \psi_1 \mid_{x=0} = E_2 \psi_2 \mid_{x=0} \implies E_1 - V_o = E_2$ (since $\psi_1 \mid_{x=0} = \psi_2 \mid_{x=0}$). So we get $E_1 - E_2 = V_o$. Similarly from the second and third regions we get $E_3 - E_2 = V_o$ and hence $E_1=E_3$.
• But you know those three equations work in different areas right? $\phi_1$ is equal to $\phi_2$ only at the boundary, and even there their second derivative is not the same. – Victor Sep 22 '16 at 16:43
• What are you getting at? If you mean that both $\psi_1$ and its derivative should be equal to the corresponding values for $\psi_2$, then I know that. I'm asking why isn't the condition for existence of second derivative imposed on the wavefunction here. – Akshit Sep 22 '16 at 16:47
• But the second derivative exists; if you look at the solution it is clearly there, even at the boundaries, where it decreases as $e^{-x}$ outside the box. Are you asking why we cannot say that the second derivative is not the same at the boundary, as we do say with the first derivative? I am sorry, I don't understand what you mean by "condition for existence of second derivative"; could you post how to derive $V=E_1+E_3$ – Victor Sep 22 '16 at 17:08
• Sorry, I had another typo. It was supposed to be $E_1-E_2$ instead of $E_1+E_2$ – Akshit Sep 22 '16 at 18:12
• You can find the full solution for the finite walled box here: hyperphysics.phy-astr.gsu.edu/hbase/quantum/pfbox.html#c1 (note: this site is a bit temperamental, if it doesn't load immediately try again a little later). – Gert Sep 22 '16 at 18:45
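As an added note (not part of the original thread): a small numerical sketch makes the resolution concrete. For the standard finite-well solution, $\psi$ and $\frac{\partial \psi}{\partial x}$ match at the boundary, but $\frac{\partial^2 \psi}{\partial x^2}$ jumps by exactly $\frac{2m}{\hbar^2} V_o \psi(0)$ because $V$ itself jumps there; demanding a continuous second derivative is what forces the spurious relation $E_1 - E_2 = V_o$. Units and numbers below are toy assumptions ($\hbar^2/2m = 1$, $L = 1$, $V_o = 50$).

```python
import numpy as np
from scipy.optimize import brentq

# Units with hbar^2/(2m) = 1, so the equation reads -psi'' + V psi = E psi.
V0, L = 50.0, 1.0

def match(E):
    # Eigenvalue condition tan(kL) = 2*a*k / (k^2 - a^2) for the well on (0, L).
    k, a = np.sqrt(E), np.sqrt(V0 - E)
    return np.tan(k * L) * (k * k - a * a) - 2 * a * k

E = brentq(match, 4.0, 9.0)   # bracket chosen to capture one bound state (assumed)
k, a = np.sqrt(E), np.sqrt(V0 - E)
A = 1.0                       # psi(0); matching gives D = A and C = a*A/k
psi_pp_left = a * a * A       # psi'' just left of 0:  (V0 - E) * psi(0)
psi_pp_right = -k * k * A     # psi'' just right of 0: -E * psi(0)
print(psi_pp_left - psi_pp_right)  # equals V0 * psi(0): psi'' jumps at x = 0
```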
https://www.xarin.com/cylinder-pressure-analysis/estimated-end-of-combustion.html | Estimated End of Combustion
Home | Support | Cylinder Pressure Analysis | Estimated End Of Combustion
The estimated end of combustion (EEOC) is required for determining the normalising value for mass fraction burned and for heat release analysis. There have been several methods suggested by researchers, but the most common is to determine the crank angle that provides a maximum value of equation 1.
$x = pV^{1.15}$ (Equation 1)
In order to reduce the effects of signal noise, the method is modified slightly to determine the crank angle that provides a maximum over a five-point summation of equation 1:
$x = \sum_{i=\theta-2}^{\theta+2} p_i V_i^{1.15}$ (Equation 2)
In order to ensure the end of combustion is not underestimated, ten degrees is added to the crank angle at which x reaches a maximum.
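A minimal NumPy sketch of equations 1 and 2 follows; this is my own illustration rather than the catool C implementation, and it assumes one pressure/volume sample per crank-angle degree.

```python
import numpy as np

def estimated_end_of_combustion(pressure, volume, crank_angle):
    """EEOC via the p*V^1.15 criterion (equations 1 and 2)."""
    x = pressure * volume ** 1.15
    # Five-point centred summation (equation 2).
    window = np.convolve(x, np.ones(5), mode="same")
    # Ignore the edges where the five-point window is incomplete.
    window[:2] = -np.inf
    window[-2:] = -np.inf
    # Add ten degrees so the end of combustion is not underestimated.
    return crank_angle[np.argmax(window)] + 10.0
```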
catool Implementation: See Return_EEOC() in analysis.c
References:
1. Brunt, M. and Emtage, A., "Evaluation of Burn Rate Routines and Analysis Errors," SAE Technical Paper 970037, 1997
http://math.stackexchange.com/questions/11217/simplification-biggl-frac-1x21-x2-biggr2-frac11-y2 | # Simplification : $\biggl(\frac{ 1+x^2}{1-x^2}\biggr)^2 = \frac{1}{1-y^2}$
I am trying to simplify this expression by as usual the expansion way,
$$\biggl(\frac{ 1+x^2}{1-x^2}\biggr)^2 = \frac{1}{1-y^2}$$
After some steps I am getting:
$$4x^2 - y^2 - 2x^2y^2 - x^4y^2 = 0$$
The answer suggested in my module is $x^2y = 2x - y$
For the answer to be correct I think what I should get is
$$4x^2 - y^2 - 4xy - x^4y^2 = 0$$
What exactly am I doing wrong? I tried to find an error in my solution, but have been unable to spot any (yet).
EDIT: For reference I am adding the other options mentioned the question (and now the question too):
if $4\biggl[\frac{x^2}{1} + \frac{x^{6}}{3}+ \frac{x^{10}}{5} + \cdots \biggr] = y^2 + \frac{y^4}{2} + \frac{y^6}{3} + \cdots$, then
$$x^2y = 2x+y \text{ or } x = 2y^2 - 1 \text{ or } x^2y = 2x + y^2$$
What you got was correct; there's something screwy going on for that "answer" in your module to be correct. – J. M. Nov 21 '10 at 14:18
@J.M:But can we reduce the equation to it ? Also I would like to ask you can you please tell me is it possible to use mathematica for this kind of simplification ? If yes, How ? :) – Quixotic Nov 21 '10 at 14:22
Your answer and the "correct answer" are two different beasts (for graphical evidence, try using ImplicitPlot[]). As for "simplification" in Mathematica, I don't know of a "no-thinking-needed" method, but note that the functions Numerator[], Denominator[] and/or Together[] are available. – J. M. Nov 21 '10 at 14:28
@J.M: I added the actual problem, check it once, in case I have committed any other error while deriving that expression. – Quixotic Nov 21 '10 at 14:43
it wasn't me... :o I don't see why this would be downvoted. – J. M. Feb 16 '12 at 6:46
My interpretation is that you want to know the relation between $y$ and $x$ so that
$\left( \dfrac{1+x^{2}}{1-x^{2}}\right) ^{2}=\dfrac{1}{1-y^{2}}.$
My detailed computation is as follows:
$\dfrac{\left( 1+x^{2}\right) ^{2}}{\left( 1-x^{2}\right) ^{2}}= \dfrac{x^{4}+2x^{2}+1}{x^{4}-2x^{2}+1}$
$\left( \dfrac{1+x^{2}}{1-x^{2}}\right) ^{2}=\dfrac{1}{1-y^{2}}\Leftrightarrow \dfrac{x^{4}+2x^{2}+1}{x^{4}-2x^{2}+1}=\dfrac{1}{1-y^{2}}$
$\Leftrightarrow \left( x^{4}+2x^{2}+1\right) \left( 1-y^{2}\right) =x^{4}-2x^{2}+1$
Expanding
$\left( x^{4}+2x^{2}+1\right) \left( 1-y^{2}\right) =2x^{2}-y^{2}+x^{4}-2x^{2}y^{2}-x^{4}y^{2}+1$
you get
$2x^{2}-y^{2}+x^{4}-2x^{2}y^{2}-x^{4}y^{2}+1=x^{4}-2x^{2}+1$
$\Leftrightarrow 4x^{2}-y^{2}-2x^{2}y^{2}-x^{4}y^{2}=0\qquad\text{the same as in the question}$
$\Leftrightarrow (1+2x^{2}+x^{4})y^{2}=4x^{2}$
$\Leftrightarrow (1+x^{2})^{2}y^{2}=4x^{2}$
$\Leftrightarrow (1+x^{2})y=\pm 2x$
Taking the positive root, we have
$y+x^{2}y=2x$
and finally
$x^{2}y=2x-y$
Taking the negative root gives
$(1+x^{2})y=-2x$
$\Leftrightarrow y+x^{2}y=-2x$
and finally
$x^{2}y=-2x-y$
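A quick symbolic check of the positive-root answer (my addition, using sympy, not part of the original post):

```python
import sympy as sp

x = sp.symbols('x')
y = 2*x / (1 + x**2)            # solves x^2*y = 2x - y for y
lhs = ((1 + x**2) / (1 - x**2))**2
rhs = 1 / (1 - y**2)
print(sp.simplify(lhs - rhs))   # prints 0, confirming the identity
```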
+1 and Accepted,Very Very well explained! Thanks you very much:) – Quixotic Nov 21 '10 at 14:58
Also, I would like to ask you: if you had 1 minute to solve this (from the exact problem itself), would you approach it similarly? Since under exam conditions I would have only that much time, or maybe 1.5 minutes at maximum. – Quixotic Nov 21 '10 at 15:03
Well,I don't really understand what you meant by "taking the negative root gives:" What I can see that both gives the same answer :) – Quixotic Nov 21 '10 at 15:11
In the same situation I would only go fast until the equation you wrote. After that I would have to know in what form is the answer required or select one from the given options. – Américo Tavares Nov 21 '10 at 15:12
$(1+x^{2})^{2}y^{2}=4x^{2}$ $\Leftrightarrow (1+x^{2})y=\pm 2x$ – Américo Tavares Nov 21 '10 at 15:14
HINT $\rm\quad\ 0 \ \ = \ \ (y^2-1)\ (1+x^2)^2 + (1 - x^2)^2$
$\rm\quad\quad\quad\quad\quad\quad\quad\ = \ \ y^2\:(1+x^2)^2 - 4\:x^2$
$\rm\quad\quad\quad\quad\quad\quad\quad\ =\ \ (y\:(1+x^2)-2\:x)\ \ (y\:(1+x^2)+2\:x)$
https://math.meta.stackexchange.com/questions/linked/2970 | 8 questions linked to/from Papers that originated on math.SE
### Has there ever been an open problem solved on Math.SE?
This question made me wonder if an open problem had ever been solved via collaboration on StackExchange.
### Finding coauthor/s for a research paper
In some of my research it is important to have knowledge of mathematics of a specific field beyond that, which I learned in my degree. In such situations it is very helpful to get support from a ...
### Citing stackexchange postings
Have any postings to stackexchange been cited in scholarly publications? If one does that, should one just name the author, the subject line, the date of posting, and the URL?
Let's say that I build up parts of a chain of reasoning leading to a publishable research discovery by posting one or more questions on http://math.stackexchange.com (or any other public Q/A-board), ...
### Theses and dissertations that originated on math.SE
In a comment to my question about published papers that originated on math.SE Asaf asks about master's theses. I think it would be interesting to have a list of those as well. So, that's what this ...
### Math SE references in thesis
I am writing my bachelor's degree thesis and have used a number of Math SE (and a Physics SE) question as references. My thesis supervisor is a bit unsure how appropriate this is. I used them mainly ...
It seems to me that my answer at "If $f$ is a smooth real valued function on the real line such that $f'(0)=1$ and $|f^{(n)} (x)|$ is uniformly bounded by $1$, then $f(x)=\sin x$?" deserves to become ...
### Regarding research work. [duplicate]
Suppose I have asked some question related to my research and got the answer. May I add that result and proof (obtained on this site) in my research article?
https://bennycheung.github.io/adventures-in-deep-reinforcement-learning

The paradigm of learning by trial and error, exclusively from rewards, is known as Reinforcement Learning (RL). The essence of RL is learning through interaction, mimicking the human way of learning through interaction with an environment, and it has its roots in behaviourist psychology. Positive rewards reinforce the behaviour that leads to them.
For a definition of the reinforcement learning problem we need to define an environment in which a perception-action-learning loop takes place. In this environment, the agent observes a given state at timestep t. The agent, following its policy, interacts with the environment by taking an action in that state, an action that may have long-term consequences. The environment then moves to the next state at timestep t+1 and the agent updates its policy. In short, the agent receives observations/states and a reward from the environment as feedback, and interacts with the environment through its actions.
The reinforcement learning problem can be described formally as a Markov Decision Process (MDP), which models the environment, the surroundings or conditions in which the agent learns and operates. A Markov process is a sequence of states with the Markov property, which states that the future is independent of the past given the present. This sufficiency of the last state means we only need the current state to evaluate the agent's future choices. A deep neural network on its own requires a lot of supervised training data and is inflexible when the modelled world changes; reinforcement learning, on the other hand, can handle changes in the world and maximize the value of its current choices.
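As a concrete sketch of this perception-action-learning loop, the snippet below shows one episode rollout. The `env` and `agent` objects are placeholders in the style of pysc2's TimeStep interface, not the actual pysc2 classes.

```python
# Minimal sketch of the MDP rollout loop; `env` and `agent` are assumed
# objects following a dm_env/pysc2 TimeStep-style interface.
def run_episode(env, agent):
    timestep = env.reset()               # observe the initial state s_0
    episode_return = 0.0
    while not timestep.last():
        action = agent.step(timestep)    # policy: state -> action
        timestep = env.step(action)      # environment: (s_t, a_t) -> s_{t+1}, r_{t+1}
        episode_return += timestep.reward
    return episode_return                # cumulative reward feedback
```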
Using PySC2 helps in understanding the practical aspects of reinforcement learning. Rather than starting with a toy example, the complexity of the StarCraft II game is more realistic: the AI needs to balance resources, building, exploring, strategizing and fighting. The need to balance multiple objectives and plan long-term in order to win makes the game feel realistic in complexity. Current techniques mostly focus on single-agent learning; the potential is to extend into multi-agent learning that applies collaborative game theory (do you remember the movie "A Beautiful Mind"?).
Figure. This shows the running state of the StarCraft II Learning Environment. The top left shows the actual StarCraft II game running. The SC2LE captures and reports the observations from the game environment. SC2LE allows a visual display of all the game observations on the right. The AI agent takes the observations and evaluates the optimal actions.
Games are ideal environments for reinforcement learning research. RL problems on real-time strategy (RTS) games are far more difficult than problems on Go due to the complexity of states, diversity of actions, and long time horizon. The following are my practical research notes that capture this learning-and-doing process. This article is intended to provide a concise experimental roadmap to follow. Each section starts with a list of reference resources and then follows with what can be tried. Some information is excerpted from the original sources for the reader's convenience, in particular so one can learn how to set up and run the experiments. As always, the ability to use Python is fundamental to the adventures.
## PySC2 Installation
PySC2 is DeepMind’s Python component of the StarCraft II Learning Environment (SC2LE). It exposes Blizzard Entertainment’s StarCraft II Machine Learning API as a Python RL Environment. This is collaboration between DeepMind and Blizzard to develop StarCraft II into a rich environment for RL research. PySC2 provides an interface for RL agents to interact with StarCraft II, getting observations and sending actions.
Install by,
conda create -n pysc2 python=3.5 anaconda
conda activate pysc2
pip install pysc2==1.2
You can run an agent to test the environment. The UI shows you the actions of the agent and is helpful for debugging and visualization purposes.
python -m pysc2.bin.agent --map Simple64
There is a human agent interface that is mainly used for debugging, but it can also be used to play the game. The UI is fairly simple and incomplete, but it’s enough to understand the basics of the game. Also, it runs on Linux.
python -m pysc2.bin.play --map Simple64
Running an agent and playing as a human save a replay by default. You can watch that replay by running:
python -m pysc2.bin.play --replay <path-to-replay>
This works for any replay as long as the map can be found by the game. The same controls work as for playing the game, so F4 to exit, pgup/pgdn to control the speed, etc.
You can save a video of the replay with the --video flag
## PySC2 Deep RL Agents
This repository implements an Advantage Actor-Critic agent baseline for the pysc2 environment as described in the DeepMind paper StarCraft II: A New Challenge for Reinforcement Learning. It uses a synchronous variant of A3C (A2C) to effectively train on GPUs and otherwise stays as close as possible to the agent described in the paper.
Progress confirmed by the project:
• (/) A2C agent
• (/) FullyConv architecture
• (/) support all spatial screen and minimap observations as well as non-spatial player observations
• (/) support the full action space as described in the DeepMind paper (predicting all arguments independently)
• (/) support training on all mini games
Unfortunately, the project stopped before achieving the following objectives.
• (x) report results for all mini games
• (x) LSTM architecture
• (x) Multi-GPU training
### Quick Install Guide
conda create -n pysc2 python=3.5 anaconda
conda activate pysc2
pip install numpy
pip install tensorflow-gpu==1.4.0 --ignore-installed
pip install pysc2==1.2
Install StarCraft II. On Linux, use 3.16.1.
When you extract the zip files, you need to enter iagreetotheeula to accept the EULA.
### Train & run
There are a few more requirements to note.
• It requires cuda 8.0, cudnn 6.0 (tested on Linux 16.04 LTS, Titan-X 12 GB)
• Modify the following file pysc2-rl-agents/rl/agents/a2c/agent.py line 202, from keepdims=True to keep_dims=True
test with:
python run.py my_experiment --map MoveToBeacon --envs 1 --vis
run and train (the default of 32 spawned environments is too many; reduce it to 16):
python run.py my_experiment --map MoveToBeacon --envs 16
run and evaluate without training:
python run.py my_experiment --map MoveToBeacon --eval
You can visualize the agents during training or evaluation with the --vis flag. See run.py for all arguments.
Summaries are written to out/summary/<experiment_name> and model checkpoints are written to out/models/<experiment_name>.
After an hour of training on the MoveToBeacon mini-game, approx. 8K episodes, the agent can almost track the beacon optimally (trained on a Titan X Pascal GPU with 12 GB).
The following plot shows the score over episodes.
## Understanding PySC2 Deep RL
Minigames are controlled environments that are useful for exploring game features in StarCraft II in isolation. Building a general-purpose learning system for StarCraft II is a daunting task, so it is logical to split that task into mini-tasks in order to advance the research. Mini-games focus on different elements of StarCraft II gameplay.
To investigate elements of the game in isolation, and to provide further fine-grained steps towards playing the full game, Deepmind has built several mini-games. These are focused scenarios on small maps that have been constructed with the purpose of testing a subset of actions and/or game mechanics with a clear reward structure. Unlike the full game where the reward is just win/lose/tie, the reward structure for mini-games can reward particular behaviours (as defined in a corresponding .SC2Map file).
### Agents
Regarding scripted agents, there is a Python file with several examples: scripted_agent.py focuses on the HallucinIce map, in which the agent makes Archon hallucinations. Besides that, there is another class that puts all hallucination actions in a list, and the agent chooses randomly between those actions.
Q-Learning and DQN agents are provided for the HallucinIce minigame with the new PySC2 release
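For concreteness, a minimal random-choice scripted agent in the spirit described above could look like the sketch below. The candidate action ids and default arguments are assumptions chosen for illustration; check them against the pysc2 1.2 action list before use.

```python
import numpy
from pysc2.agents import base_agent
from pysc2.lib import actions

class RandomChoiceAgent(base_agent.BaseAgent):
    # A hand-picked list of candidate actions (assumed for illustration).
    candidate_ids = [
        actions.FUNCTIONS.no_op.id,
        actions.FUNCTIONS.select_army.id,
    ]

    def step(self, obs):
        super(RandomChoiceAgent, self).step(obs)
        available = obs.observation["available_actions"]
        usable = [i for i in self.candidate_ids if i in available]
        chosen = int(numpy.random.choice(usable))
        # Default arguments ([[0]] per argument) suit these simple actions.
        args = [[0]] * len(actions.FUNCTIONS[chosen].args)
        return actions.FunctionCall(chosen, args)
```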
### How to run mini-games in your environment
Place the .SC2Map files into /Applications/StarCraft II/Maps/mini_games/ (sometimes the Maps folder might not exist; if so, please create it).
Go to pysc2/maps/mini_games.py and add the following mini-game names to the mini_games array:
mini_games = [ # These mini-game names should already be in your list
"BuildMarines", # 900s
"CollectMineralsAndGas", # 420s
"CollectMineralShards", # 120s
"DefeatRoaches", # 120s
"DefeatZerglingsAndBanelings", # 120s
"FindAndDefeatZerglings", # 180s
"MoveToBeacon", # 120s ##Now you add this few lines
"SentryDefense", # 120s
"ForceField", # 30s
"HallucinIce", # 30s
"FlowerFields", # 60s
"TheRightPath", # 300s
"RedWaves", # 180s
"BlueMoon", # 60s
"MicroPrism", # 45s
]
We can copy the sample from Starcraft_pysc2_minigames/Agents/scripted_agent.py to pysc2/agents/scripted_agent_test.py. Subsequently, we can test the scripted sample bot agents with the new mini-games. You should see something like the following.
Figure. excerpted from Gema Parreño's blog, StarCraft II Learning environment - running the Hallucination Archon scripted agent
### Installation
We need to install the required packages from requirements.txt, but comment out the two packages below (both are installed from source afterwards):
# PySC2==2.0
numpy==1.14.0
Keras==2.2.2
Keras-Applications==1.0.4
# keras-contrib==2.0.8
Keras-Preprocessing==1.0.2
keras-rl==0.4.2
pandas==0.22.0
Install keras-contrib from source by,
cd keras-contrib
python setup.py install
After doing the previous mini_games.py modification, you can install pysc2 manually from source
cd pysc2
python setup.py install
You can test the installation with the added mini-games by
python -m pysc2.bin.agent --map HallucinIce
You can test the new mini-games bot agents from scripted_agent_test.py by
python -m pysc2.bin.agent --map HallucinIce --agent pysc2.agents.scripted_agent_test.HallucinationArchon
## AlphaStar the Next Level?
While learners focus on using the SC2LE to understand smaller-scale minigame reinforcement learning, the DeepMind AlphaStar team has successfully scaled up to train an AI to defeat a top professional StarCraft II player. In a series of test matches held in December 2018, AlphaStar decisively beat Team Liquid's Grzegorz "MaNa" Komincz, one of the world's strongest professional StarCraft players, 5-0.
Watch the “AlphaStar: The inside story”
AlphaStar uses a novel multi-agent learning algorithm. The neural network was initially trained by supervised learning from anonymised human games released by Blizzard. This allowed AlphaStar to learn, by imitation, the basic micro and macro-strategies used by players on the StarCraft ladder.
Figure. excerpted from the AlphaStar Team blog post, showing how the multi-agent reinforcement learning process is created.
Subsequently, these were then used to seed a multi-agent reinforcement learning process. A continuous league was created, with the agents of the league - competitors (AI) - playing games against each other, akin to how humans experience the game of StarCraft by playing on the StarCraft ladder. New competitors were dynamically added to the league, by branching from existing competitors; each agent then learns from games against other competitors. This new form of training takes the ideas of population-based and multi-agent reinforcement learning further, creating a process that continually explores the huge strategic space of StarCraft gameplay, while ensuring that each competitor performs well against the strongest strategies, and does not forget how to defeat earlier ones.
Looks like the new level of AI using deep reinforcement learning is promising!
## More References
• David Silver, Reinforcement Learning, a series of 10 youtube video lectures https://www.youtube.com/watch?v=2pWv7GOvuf0&list=PLzuuYNsE1EZAXYR4FJ75jcJseBmo4KQ9-
• This is valuable if you are new to RL and want to understand the mathematical and philosophical background to Reinforcement Learning.
• Richard Sutton and Andrew Barto, Reinforcement Learning: An Introduction, MIT Press, 2017, ISBN:9780262193986
• This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field’s intellectual foundations to the most recent developments and applications.
• Rowel Atienza, Advanced Deep Learning with Keras, Packt Publishing, 2018, ISBN:9781788629416
• Chapter 9: Deep Reinforcement Learning
http://www.brokencontrollers.com/article/20923884.shtml | ## Find equidistant points between two coordinates
Equidistant Formula - Equidistant means equal distance from every point. To find equidistant distance for any two end points, we have to use both mid point formula
Find equidistant points between two coordinates - With more investigation you can see that when t is between 0 and 1, (x,y) is on the connecting line segment, and to get a point a certain fraction of the distance
Find the Locus of Points Equidistant from Two Points - By Mark Ryan. If you're given two points, and you're asked to find the locus of points equidistant from these two points, you'll always find the same thing: that the
geometry - Finding perpendicular bisector of the line segment joining (−1 . and it results in coordinates of intersection points of circumcircle centers.
Equidistant: Definition & Formula - An equidistant point is a point that is an equal distance from two other points. In order to find a point that is equidistant from other points, we can use the midpoint formula, once we know the x and y coordinates.
Equation of a line equidistant from 2 points - Find the point on the y-axis that is equidistant from (-4, -2) and (3, 1). If (a, b) and (c, d) are points in the plane, then the distance between them is $\sqrt{(a-c)^2+(b-d)^2}$. For the second problem, what do you know about the coordinates of a point on the y-axis?
Find Point Equidistant From Two Points A1 - Given 2 points, we will find the equation of the line equidistant between Equation of
Coordinate Geometry | Distance Formula - Application of distance formula. Find Point Equidistant From Two Points A1. Anil Kumar
Lines and equidistant points - Coordinate Geometry - We will use distance formula to find coordinates of a point on x
If (x, 4) is equidistant from (5, -2) and (3, 4), find x. - Finding a point on a line equidistant from other points.
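Pulling the recipe in these snippets together, here is a short Python sketch (my own illustration): expanding $|P-A|^2 = |P-B|^2$ gives the perpendicular bisector directly as a line $ax + by = c$.

```python
def perpendicular_bisector(A, B):
    """Line a*x + b*y = c of points equidistant from A and B."""
    (x1, y1), (x2, y2) = A, B
    a = 2 * (x2 - x1)
    b = 2 * (y2 - y1)
    c = x2**2 + y2**2 - x1**2 - y1**2
    return a, b, c

print(perpendicular_bisector((2, 4), (0, 4)))  # (-4, 0, -4): the vertical line x = 1
```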
## find the locus of a point equidistant from the lines
The Locus of a point equidistant from a point and a line – GeoGebra - The Locus of a point equidistant from a point and a line. The point P(x, y) moves such that it remains equidistant from a point and a line. The distance from a point to a line is always the perpendicular distance.
Find the locus of a point equidistant from two lines y=sqrt3x and y=1 - Locus is given by pair of lines given by x2−y2=0 i.e. x+y=0 and x−y=0
Find the Locus of Points Equidistant from Two Points - To find the locus of all points equidistant from two given points, follow these steps : You got it—it's a vertical line that goes through the midpoint of the segment
find the locus of a point equidistant from the lines x+y+4=0 and 7x + y - Please find the solved answer for this. I'm also giving you a part of the solution. Squaring, we get: 25(h+k+7)² = (7h+k+20)².
Locus of a Point (solutions, examples, videos) - Construct the locus of point P moving equidistant from fixed points X and Y and A point P moves so that it is always equidistant from two intersecting lines AB
VECTORS: Find locus of points equal distance from two lines - However, when trying to find the set of (x,y) that are equidistant from L1,L2, you can't just use the same x,y for the point and the corresponding
Find the equation to the locus of a point equidistant from the points - Let the point equidistant from both these points be $(x,y)$. Therefore, $\sqrt{(x-2)^2+(y-4)^2}$ ... The terms cancel out, and we get: the locus is the straight line parallel to the y-axis crossing at . The graph below shows the
Locus of a point equidistant from two points - Video on Oblique angle Bisector for Pair of Lines: https://www.youtube.com/watch ?v
analytic geometry - This is the Solution of Question From RD SHARMA book of CLASS 11 CHAPTER STRAIGHT
What is the locus of a point equidistant from point (2,4) and (0,4 - Locus of a point equidistant from two points. Finding the locus of a point such that sum of
## how to find equidistant of three points
Centers of a Triangle - In other words, it is the point that is equidistant from all three vertices. The circumcenter is constructed in the following way. Again, find the midpoints of the sides of the triangle. Next, construct the perpendicular line to the side that passes through the midpoint of each side.
Finding Equidistant Points - How do you find a point that is equidistant from three other points?
Point Equidistant from 3 Other Points - Math Forum - We have to line up 3 pins on a lifting bridle to be 120 degrees apart from each other to connect into How does one find these points without using a protractor ?
3 equidistant points on a circle- Math Central - Determine the point that is equidistant from the points A(-1,7), B(6,6) I agree that the distance from (2,3) to each of the 3 points is 5 (they all
Determining the point that is equidistant from three other points - Find the coordinates of the point equidistant from three given points A (5 , 1 ) , B (- 3,-7 ) and C( 7, -1).
Find the coordinates of the point equidistant from three given points A - Question 442382: Find coordinates for the point equidistant from (2,1) (2,-4) (-3,1) Please i really need your help ! thankyou. Found 3 solutions by MathLover1,
SOLUTION: Find coordinates for the point equidistant from (2,1) (2,-4 - The point in a plane equidistant from 3 non colinear* points is called the circumcircle. Here's a solution using the distance formula. Since our
How to find a point which is equidistant from three other points - Points equidistant from A, B and C lie along a line. This line is the intersection of various planes that bisect the line segments joining pairs of
geometry - The circumcenter of a triangle is the point that is equidistant from the vertices of the triangle. The three medians of a triangle meet in the centroid. The centroid is Find the measure of the angles ∠EBF and ∠FCB.
More about triangles (Geometry, Triangles) – Mathplanet - Find a point that is equidistant from three other points using a ruler and a protractor.
## what is the set of all points that are equidistant from two points
Find the Locus of Points Equidistant from Two Points - (1) The set of all points in a plane that are a given distance from a point in the plane. (2) The set of all points in a plane that are equidistant from two points in the
The Set of All Points that – GeoGebra - (688,#27) Find an equation of the set of all points equidistant from the points A(-1, 5,3) and B(6,2,-2). Solution: We need a set of points P where . Distance formula
(688,#27) Find an equation of the set of all points equidistant from - Let the parametric point be (x,y) on the plane which is equidistant from the given points (−9,3,3) & (6,−2,4) hence, we have
calculus - In Mathematics we often say "the set of all points that ". Example: An ellipse is the locus of points whose distance from two fixed points add up to a constant.
Set of All Points - If you're given two points, and you're asked to find the locus of points equidistant from these two points, you'll always find the same thing: that the locus of points is actually the perpendicular bisector of the segment that joins the two points.
[Geometry] The set of all points equidistant from two points in R - The set of all points equidistant to two points A(x_a, y_a), B(x_b, y_b) in R^2 is the line given by: 2(x_b − x_a)x + 2(y_b − y_a)y
The set of all points of a plane which are equidistant from a - A geometric figure.” It depends on the metric. If done in taxi cab metric, it produces a square. If done “normally” it's a circle. So what you wrote is
Set of Points Equidistant from Two Points in Taxicab Geometry - In taxicab geometry the usual Euclidean distance between points is replaced by the sum of the absolute differences of their coordinates In
Equidistant points - An alternate definition of a line is the "the set of all points equidistant from two given points". This line is known as the locus of the point P. In the figure above
Equation of a line equidistant from 2 points - Given 2 points, we will find the equation of the line equidistant between them.
## how to find a point equidistant from two lines
geometry - Assuming such a point Q exists, it must lie on the bisector line b of P1 and P2, i.e. the line through the midpoint of P1 and P2 and orthogonal to
Locus from two lines - FIND THE LOCUS OF POINTS EQUIDISTANT FROM TWO POINTS. Identify a pattern. The figure shows the two given points, A and B, along with four new points that are each equidistant from the given points. Look outside the pattern. You come up empty in Step 2. Look inside the pattern. Nothing noteworthy here, either. Draw the
VECTORS: Find locus of points equal distance from two lines - Locus is given by pair of lines given by x2−y2=0 i.e. x+y=0 and x−y=0 All points on lines bisecting are equidistant from the two given lines.
Find the Locus of Points Equidistant from Two Points - For example, consider the line segment containing the end points A and B. To find the equidistant distance for any two end points, we have to use
Find the locus of a point equidistant from two lines y=sqrt3x and y=1 - A point P moves so that it is always equidistant from two intersecting lines AB and Given the line AB and the point Q, find one or more points that are 3 cm from
Equidistant Formula - Let the two given lines be ax + by + c = 0 and dx + ey + f = 0. Let (x', y') be any arbitrary point on the line equidistant from the two given lines.
Locus of a Point (solutions, examples, videos) - Line A is parallel to Line B, and Line C is parallel to Line D. You're looking for a point that's at the same distance to both A and B, while at the
What is the equation of a line equidistant from two other lines - P is a point which has an equal distance from the two intersecting lines. - The locus of P is a pair of angle bisectors of the angles formed by the two intersecting
How to find a point equidistant from two different pairs of - The locus of points that are equidistant from two intersecting lines.
Locus (equidistant from two intersecting lines) – GeoGebra - Video on Oblique angle Bisector for Pair of Lines: https://www.youtube.com/watch ?v | 2019-11-17 23:05:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6679463386535645, "perplexity": 231.26989943247014}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669352.5/warc/CC-MAIN-20191117215823-20191118003823-00419.warc.gz"} |
http://nylogic.org/talks/superstrong-cardinals-are-never-laver-indestructible-and-neither-are-extendible-almost-huge-and-rank-into-rank-cardinals | # Superstrong cardinals are never Laver indestructible, and neither are extendible, almost huge and rank-into-rank cardinals
Set theory seminar: Friday, February 1, 2013, 12:00 am, GC 5383 (new location)
### The City University of New York
Although the large cardinal indestructibility phenomenon, initiated with Laver's seminal 1978 result that any supercompact cardinal $\kappa$ can be made indestructible by ${<}\kappa$-directed closed forcing and continued with the Gitik-Shelah treatment of strong cardinals, is by now nearly pervasive in set theory, nevertheless I shall show that no superstrong cardinal—and hence also no $1$-extendible cardinal, no almost huge cardinal and no rank-into-rank cardinal—can be made indestructible, even by comparatively mild forcing: all such cardinals $\kappa$ are destroyed by $\mathrm{Add}(\kappa,1)$, by $\mathrm{Add}(\kappa,\kappa^+)$, by $\mathrm{Add}(\kappa^+,1)$ and by many other commonly considered forcing notions.
This is very recent joint work with Konstantinos Tsaprounis and Joan Bagaria.
Professor Hamkins (Ph.D. 1994 UC Berkeley) conducts research in mathematical and philosophical logic, particularly set theory, with a focus on the mathematics and philosophy of the infinite. He has been particularly interested in the interaction of forcing and large cardinals, two central themes of contemporary set-theoretic research. He has worked in the theory of infinitary computability, introducing (with A. Lewis and J. Kidder) the theory of infinite time Turing machines, as well as in the theory of infinitary utilitarianism and, more recently, infinite chess. His work on the automorphism tower problem lies at the intersection of group theory and set theory. Recently, he has been preoccupied with various mathematical and philosophical issues surrounding the set-theoretic multiverse, engaging with the emerging debate on pluralism in the philosophy of set theory, as well as the mathematical questions to which they lead, such as in his work on the modal logic of forcing and set-theoretic geology.
https://mathematica.stackexchange.com/questions/98664/herons-method-of-square-root-calculation-issue-with-previous-suggestion | # Heron's Method of Square Root Calculation Issue with Previous Suggestion [closed]
Being interested in limit points, which always seem just a little out of reach for me, I recently came across a previous question and answers concerning Heron's (Babylonian) method for calculating square roots.
Two solutions were put forward (the first slightly modified here to look at output with increased precision and more iterations).
Being curious as to just how the method worked on numbers other than integers, I asked both to compute the square root of 27.5625 (i.e. 5.25^2).
The first simply uses the Mean Function to calculate the next estimated value.
heronSqrt1[x_, n_: 10] :=
Module[{f}, f[num_, est_] := SetPrecision[N@Mean[{est, num/est}], 20];
NestList[f[x, #] &, n/3., n]]
heronSqrt1[27.5265]
Using Ver. 10.3, this method computes the following solution:
{3.33333, 5.7956416666666665805, 5.2725794522380562412, \
5.2466344586498774305, 5.2465703086983612735, 5.2465703083061789869, \
5.2465703083061789869, 5.2465703083061789869, 5.2465703083061789869, \
5.2465703083061789869, 5.2465703083061789869}
A second approach making use of the FixedPoint Function also given in was:
heronSqrt2[x_ /; Element[x, Reals] && x >= 0] :=
FixedPoint[(# + x/#)/2. &, x/3.]
and when executed
heronSqrt2[27.5625]
gives:
the correct answer (Sqrt[27.5625]) of 5.25.
Can anyone explain why the first suggestion (heronSqrt1) fails to converge on the correct answer, yet the second (heronSqrt2) succeeds with respect to the accuracy of the result? It seems apparent, but mystifying to me, why the first converges to the same number that is very close to the correct answer, within 5 iterations, but nonetheless converges to the wrong number. It is as if precision is somehow being lost between the two statements in heronSqrt1.
In looking at the logic they seem to be the same to me, even after increasing the precision to determine if HeronSqrt1 simply was off due to truncation error.
Apologies for asking as a separate question, but I still don't have enough points to make a comment on the thread of the previous question, where it might have been more appropriately placed.
• I believe that all calculations with your heronSqrt2 and heronSqrt1 will revert to machine precision, because one of the numbers used is the machine precision number 3.0. – murray Nov 4 '15 at 23:35
• Did you know that $27.5265 \ne 27.5625$? – Rahul Nov 5 '15 at 0:52
• Thanks for catching that, I should have cut and pasted rather than take a more dyslexic approach. – Stuart Poss Nov 5 '15 at 5:36
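Following up on the typo caught in the comments, a direct check (my addition, not part of the original thread) confirms that heronSqrt1 in fact converged to the exact square root of the number it was actually given:

```mathematica
N[Sqrt[275265/10000], 20] (* 5.2465703083061789869..., matching heronSqrt1's limit *)
N[Sqrt[275625/10000], 20] (* 5.2500000000000000000, the intended target *)
```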
https://socratic.org/questions/what-is-the-relative-shape-and-volume-of-an-aluminum-cylinder-at-stp | # What is the relative shape and volume of an aluminum cylinder at STP?
$\text{Aluminum metal}$ is a solid at $\text{STP}$ so you will have to quote the mass and the shape before we can determine the volume.
https://cstheory.stackexchange.com/questions/40765/to-what-extent-supervised-learning-erm-learn-first-order-knowledge | # To what extent supervised learning ERM learn first-order knowledge
Suppose I have a collection of (hidden) first-order rules: $$\mathcal{R}: \{ Q_i(x) => P_i(x) \}_{i=1}^{k}$$ all defined over $x \in \mathcal{X}$.
I can use these rules and (automatically) generate a large collection of (training) data for my supervised system: $\mathcal{D}: \{(x_i, y_i)\}_{i=1}^{n}$, say for $y_i \in \{-1, +1\}$, and run a supervised system on this sampled data, and test on a heldout set.
If my rules are compatible (not contradictory), a rule-based system should be able to get a perfect score on the sampled set. However, I am not sure how a supervised system would do on this.
Are there any possibility/impossibility results on the ability of supervised systems to learn first-order rules (possibly under some assumptions) from finite samples?
This is basically the reverse of rule-learning, in which the goal is to learn some rules $\mathcal{R}$, given a training data $\mathcal{D}$.
I did a little bit of Googling but didn't get anything directly relevant (all I found was algorithms for first-order rule induction, or training supervised systems that use first-order rules as features). That said, it's possible that I am missing some results on this. I would appreciate any thoughts on this.
• What do you mean by "first-order" rules? If you are referring to first-order logic, then it sounds like $Q_i$ and $P_i$ can be arbitrary first-order formulas (involving $\forall$, $\exists$ and further nested $\Rightarrow$), in which case you umight as well just say that all your formulas are of the form $R_i(x)$ for an arbitrary formula. Do you mean Horn clauses, by any chance? – Andrej Bauer May 10 '18 at 6:14
• How does $y_i$ relate to $x_i$ and to the rules? Is the idea that $y_i=1$ iff $x_i$ satisfies all the rules? In other words, how do you use these rules to generate the training samples $(x_i,y_i)$? – D.W. May 10 '18 at 6:27
• Your formulation is equivalent to saying that the ruleset is $\mathcal{R} : \{R_i(x)\}_{i=1}^k$. Here you can define the predicate $R_i(x)$ to be equivalent to $Q_i(x) \implies P_i(x)$. And since you can identify any predicate $R_i(x)$ with a set $S_i$, such that $R_i(x)$ is true iff $x \in S_i$, your ruleset becomes equivalent to saying that you have a single rule that $x \in S$, where $S= S_1 \cap \cdots \cap S_k$. So you are asking how well supervised learning can learn membership in an arbitrary set. By the no free lunch theorem, it's impossible without a prior on $S$. – D.W. May 10 '18 at 6:28
• @D.W.: is there a difference if we presume that the formulas are Horn clauses? (There's still a question of how complicated can the atomic predicates be.) – Andrej Bauer May 10 '18 at 8:46
• @AndrejBauer, Yeah, that seems like it should change the answer if the atomic predicates are simple enough. – D.W. May 10 '18 at 15:52
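As a small illustration of the data-generation setup described in the question (my own toy sketch; the predicates and rules are invented for demonstration, not taken from the thread):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 4))

def rules_hold(x):
    # Two toy implication rules: (x0 > 0 => x1 > 0) and (x2 > 0.5 => x3 < 0).
    return (x[0] <= 0 or x[1] > 0) and (x[2] <= 0.5 or x[3] < 0)

y = np.array([1 if rules_hold(x) else -1 for x in X])
clf = DecisionTreeClassifier().fit(X[:1500], y[:1500])
print(clf.score(X[1500:], y[1500:]))  # heldout accuracy on rule-generated labels
```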
https://jgrapht.org/javadoc/org.jgrapht.demo/org/jgrapht/demo/WarnsdorffRuleKnightTourHeuristic.html | ## Class WarnsdorffRuleKnightTourHeuristic
• java.lang.Object
• org.jgrapht.demo.WarnsdorffRuleKnightTourHeuristic
• public class WarnsdorffRuleKnightTourHeuristic
extends java.lang.Object
Implementation of Warnsdorff's rule (https://en.wikipedia.org/wiki/Knight%27s_tour#Warnsdorf's_rule), a heuristic for finding a knight's tour on chessboards. A knight's tour is a sequence of moves of a knight on a chessboard such that the knight visits every square only once. If the knight ends on a square that is one knight's move from the beginning square (so that it could tour the board again immediately, following the same path), the tour is closed; otherwise it is open. The knight's tour problem is the mathematical problem of finding a knight's tour.

Description of Warnsdorff's rule: set a start cell, then always proceed to the cell that has the fewest onward moves. In case of a tie (i.e. there exists more than one possible choice for the next cell), go to the cell with the largest Euclidean distance from the center of the board.

This implementation also allows you to find a structured knight's tour. A knight's tour on a board of size $n \times m$ is called structured if it contains the following $8$ UNDIRECTED moves:

1. $(1, 0) \to (0, 2)$, denoted as $1$ on the picture below.
2. $(2, 0) \to (0, 1)$, denoted as $2$ on the picture below.
3. $(n - 3, 0) \to (n - 1, 1)$, denoted as $3$ on the picture below.
4. $(n - 2, 0) \to (n - 1, 2)$, denoted as $4$ on the picture below.
5. $(0, m - 3) \to (1, m - 1)$, denoted as $5$ on the picture below.
6. $(0, m - 2) \to (2, m - 1)$, denoted as $6$ on the picture below.
7. $(n - 3, m - 1) \to (n - 1, m - 2)$, denoted as $7$ on the picture below.
8. $(n - 2, m - 1) \to (n - 1, m - 3)$, denoted as $8$ on the picture below.

```
#########################################
#*12*********************************34*#
#2*************************************3#
#1*************************************4#
#***************************************#
#***************************************#
#***************************************#
#***************************************#
#***************************************#
#***************************************#
#***************************************#
#***************************************#
#***************************************#
#***************************************#
#***************************************#
#***************************************#
#6*************************************8#
#5*************************************7#
#*65*********************************78*#
#########################################
```

If you are confused by the formal definition of the structured knight's tour, please refer to the illustration on page $3$ of the paper "An efficient algorithm for the Knight's tour problem" by Ian Parberry. One more feature of this implementation is the option to return a shifted knight's tour, where all cells' coordinates are shifted by given values; this is effectively the knight's tour of a piece of the board.
• ### Constructor Summary
Constructors
Constructor Description
WarnsdorffRuleKnightTourHeuristic(int n)
Constructor.
WarnsdorffRuleKnightTourHeuristic(int n, int m)
Constructor.
• ### Method Summary
All Methods
Modifier and Type Method Description
org.jgrapht.demo.KnightTour getTour(org.jgrapht.demo.TourType type, boolean structured, int shiftX, int shiftY)
Generates a knight's tour that satisfies the input parameters.
• ### Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
• ### Constructor Detail
• #### WarnsdorffRuleKnightTourHeuristic
public WarnsdorffRuleKnightTourHeuristic(int n)
Constructor.
Parameters:
n - width and height of the board.
• #### WarnsdorffRuleKnightTourHeuristic
public WarnsdorffRuleKnightTourHeuristic(int n,
int m)
Constructor.
Parameters:
n - width of the board.
m - height of the board.
• ### Method Detail
• #### getTour
public org.jgrapht.demo.KnightTour getTour(org.jgrapht.demo.TourType type,
boolean structured,
int shiftX,
int shiftY)
Generates a knight's tour that satisfies the input parameters. Warnsdorff's rule heuristic is an example of a greedy method, which we use to select the next cell to move to, and which thus may fail to find a tour. However, another greedy heuristic is used to prevent failing: in case of a tie we select a cell with the largest Euclidean distance from the center of the board. Such a combination of greedy methods significantly increases our chances of finding a tour.
Parameters:
type - of the tour.
structured - true if we want the tour to be structured, otherwise false.
shiftX - the value will be added to each cell's x-coordinate to achieve the effect of shifting.
shiftY - the value will be added to each cell's y-coordinate to achieve the effect of shifting.
Returns:
knight's tour.
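For orientation, here is a minimal usage sketch of this class. Assumptions not stated on this page: that the `TourType` enum exposes an `OPEN` constant and that `KnightTour` has a readable `toString()`.

```java
import org.jgrapht.demo.KnightTour;
import org.jgrapht.demo.TourType;
import org.jgrapht.demo.WarnsdorffRuleKnightTourHeuristic;

public class KnightTourExample {
    public static void main(String[] args) {
        // Heuristic for a standard 8x8 board (square-board constructor).
        WarnsdorffRuleKnightTourHeuristic heuristic =
                new WarnsdorffRuleKnightTourHeuristic(8);

        // Open, unstructured tour with no coordinate shift.
        KnightTour tour = heuristic.getTour(TourType.OPEN, false, 0, 0);

        System.out.println(tour);
    }
}
```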
https://zbmath.org/?q=an%3A0967.11016 | # zbMATH — the first resource for mathematics
Ordinary $$p$$-adic étale cohomology groups attached to towers of elliptic modular curves. II. (English) Zbl 0967.11016
In two previous papers [J. Reine Angew. Math. 463, 49-98 (1995; Zbl 0827.11025)] and [Comp. Math. 115, 241-301 (1999; Zbl 0967.11015)] the author studied the $$p$$-adic Hodge structure of the ordinary part of the (generalized) $$p$$-adic Eichler-Shimura cohomology groups. In those papers the $$\omega ^i$$-eigenspaces for the action of $${\mathbb F}_p^{\times}$$ with $$i\equiv 0, -1 \pmod{p-1}, \omega :{\mathbb F}_p^{\times}\rightarrow {\mathbb Z}_p^{\times}$$ the Teichmüller character, were excluded. In the present paper that restriction is removed. Whereas in the previous work certain ‘good quotients’ of (generalized) Jacobians of modular curves were employed, now also quotients which have bad reduction at $$p$$ enter the picture. The result is applied in the construction of large abelian $$p$$-extensions over cyclotomic $${\mathbb Z}_p$$-extensions of abelian number fields.
##### MSC:
11F33 Congruences for modular and $p$-adic modular forms
11F67 Special values of automorphic $L$-series, periods of automorphic forms, cohomology, modular symbols
11R23 Iwasawa theory
http://math.stackexchange.com/questions/591389/question-involving-cauchy-sequences | # Question involving Cauchy sequences
Suppose $\left \{ a_n \right \}$ is a Cauchy sequence, and $\left \{ x_n \right \}$ is a sequence with a number $k>0$ such that $|x_n - x_m|\leq k|a_n - a_m|$ for all $n,m\in \mathbb{N}$. Is $\left\{ x_n \right\}$ necessarily a Cauchy sequence? Either prove or give a counter-example.
My attempt: I think the statement is true. Since $\left \{ a_n \right \}$ is a Cauchy sequence, for every $\epsilon >0$ there is an $N$ so that for all $n,m>N$, $|a_n - a_m| < \frac{\epsilon}{k}$.
So for any $n,m$, we get $|x_n - x_m|<\epsilon \Rightarrow |x_n - x_m|\leq k|a_n - a_m|$.
Is that it to the proof? Looks quite simple to me.
Did you mean to say that there is a number $k>0$ such that $|x_{n}-x_{m}|\leq k|a_{n}-a_{m}|$ for all $n,m\in\mathbb{N}$? If so your proof is almost correct -- you just need to switch the direction of the last implication. That is, for any $n,m\geq N$ we have $|x_{n}-x_{m}|\leq k|a_{n}-a_{m}|\Rightarrow |x_{n}-x_{m}|<\epsilon$. – Eric Dec 3 '13 at 19:35
Simple facts have simple proofs. – Carsten S Dec 3 '13 at 19:37
So is {$x_n$} necessarily a Cauchy? – user87274 Dec 3 '13 at 20:44
Yeah, that's right. The fact that $|a_n-a_m| < \frac{\epsilon}{k}$ is important: we then have $|x_n - x_m| \leq k|a_n - a_m| \leq k\cdot\frac{\epsilon}{k}=\epsilon$, so $|x_n - x_m| \le \epsilon$.
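Assembled cleanly, the argument the thread settles on is (a sketch): Let $\epsilon>0$. Since $\{a_n\}$ is Cauchy, pick $N$ such that $|a_n-a_m|<\frac{\epsilon}{k}$ for all $n,m>N$. Then for all $n,m>N$,
$$|x_n-x_m|\le k|a_n-a_m|<k\cdot\frac{\epsilon}{k}=\epsilon,$$
so $\{x_n\}$ is Cauchy.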
https://math.stackexchange.com/questions/linked/17366 | # Linked Questions
11 questions linked to/from Why is $\pi$ equal to $3.14159...$?
### Intuitive Understanding of the constant “$e$”
Potentially related-questions, shown before posting, didn't have anything like this, so I apologize in advance if this is a duplicate. I know there are many ways of calculating (or should I say "...
### $\pi$ in arbitrary metric spaces
Whoever finds a norm for which $\pi=42$ is crowned nerd of the day! Can the principle of $\pi$ in euclidean space be generalized to 2-dimensional metric/normed spaces in a reasonable way? For ...
### Symbol for “probably equal to” (barring pathology)?
I am writing lecture notes for an applied statistical mechanics course and often need to express the notion that something is very probably true for functional forms found in the wild, without ...
### Interesting Math for 3-graders
I'm supposed to give a 30 minutes math lecture tomorrow at my 3-grade daughter's class. Can you give me some ideas of mathemathical puzzles, riddles, facts etc. that would interest kids at this age? ...
### Are $\pi$ and $e$ algebraically independent?
Update Edit : Title of this question formerly was "Is there a polynomial relation between $e$ and $\pi$?" Is there a polynomial relation (with algebraic numbers as coefficients) between $e$ or $\pi$ ?...
### Why are all circles similar? (Why is $\pi$ a constant?) [duplicate]
I just know that I'm going to look like a crackpot, but here goes. The number $\pi$ is defined as the ratio of the circumference of a circle to its diameter. So there is an assumption here that all ...
### The origin of $\pi$
How was $\pi$ originally found? Was it originally found using the ratio of the circumference to diameter of a circle of was it found using trigonometric functions? I am trying to find a way to find ...
### Philosophical question about Pi and connections in maths
Pi is the ratio of circumference of a circle to its diameter. Okay. Got that, easy enough. Now, why does the following equality hold true? \frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{...
### Is the value of $\pi$ in 2d the same in 3d? [closed]
I am starting with my question with the note "Assume no math skills". Given that, all down votes are welcomed. (At the expense of better understanding of course!) Given my first question: What is ...
### Not $\pi$ - What if I used $3$? Teaching $\pi$ discovery to K-6th grade
So, in ancient Mesopotamia they knew that they didn't really have the correct number ($\pi$) to determine attributes of a circle. They rounded to $3$. If you acted as though $\pi=3$, what shape would ...
### Why is treating $i$ as a constant in integration, valid?
Why do we, when doing integrals like $\int i\cos xdx$, treat $i$ to be a constant? Is there any proof? Wolfram gives the answer simply as $i\sin x+\text{[constant]}$. I have a confusion, because ...
http://www.acmerblog.com/hdu-3774-ropes-6769.html | 2015
04-10
# Ropes
When climbing a section or “pitch”, the lead climber ascends first, taking a rope with them that they anchor to the rock for protection to ascend. Once at the top of a pitch, the lead climber has the second climber attach to the rope, so they can ascend with the safety of the rope. Once the second climber reaches the top of the pitch, the third attaches, and so on until all the climbers have ascended.
For example, for a 10 meter pitch and 50 meter rope, at most 6 climbers could ascend, with the last climber attaching to the end of the rope. To ascend safely, there must be at least 2 climbers and the rope must be at least as long as the pitch.
This process is repeated on each pitch of the climb, until the top is reached. Then to descend, the climbing rope is hung at its midpoint from an anchor (each half must reach the ground).
The climbers then each rappel from this rope. The rope is retrieved from the anchor by pulling one side of the rope, slipping it though the anchor and allowing it to fall to the ground.
To descend safely, the rope must be at least twice as long as the sum of the lengths of the pitches.
For example, a 60 meter rope is required to rappel from a 30 meter climb, no matter how many climbers are involved.
Climbing ropes come in 50, 60 and 70 meter lengths. It is best to take the shortest rope needed for a given climb because this saves weight. You are to determine the maximum number of climbers that can use each type of rope on a given climb.
The input consists of a number of cases. Each case specifies a climb on a line, as a sequence of pitch lengths as in:
N P1 P2 … PN
Here N is the positive number of pitches, with 1 ≤ N ≤ 100, and Pk is the positive integer length (in meters) of each pitch, with 1 ≤ Pk ≤ 100. The last line (indicating the end of input) is a single 0.
Sample Input:
1 25
2 10 20
0

Sample Output:
3 3 3
0 4 4
/*
  An easy problem, though the statement is genuinely hard to parse. :-)
  len = rope length, sum = total height of all pitches, max = highest pitch.
  1. If len < 2*sum, descending is impossible (the rope, hung at its
     midpoint, must reach the ground), so print 0.
  2. Otherwise the maximum number of climbers = len/max + 1 (one climber
     attached every `max` meters of rope, plus the one at the end).
*/
#include <stdio.h>

int judge(int x, int sum, int max)
{
    if (sum * 2 > x) return 0;   /* rope too short to rappel from the top */
    return x / max + 1;          /* climbers that fit on the longest pitch */
}

int main()
{
    int n;
    int p[111], max, sum;
    int i;
    while (scanf("%d", &n), n)
    {
        max = 0;
        sum = 0;
        for (i = 0; i < n; i++)
        {
            scanf("%d", &p[i]);
            sum += p[i];
            if (p[i] > max) max = p[i];
        }
        printf("%d ", judge(50, sum, max));
        printf("%d ", judge(60, sum, max));
        printf("%d\n", judge(70, sum, max));
    }
    return 0;
}
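Sanity check against the second sample case ($n=2$, pitches $10$ and $20$): here sum $=30$ and max $=20$, so for the 50 m rope $2\times 30=60>50$ gives $0$, while the 60 m and 70 m ropes give $60/20+1=4$ and $70/20+1=4$ (integer division), matching the expected output 0 4 4.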
1. The first sentence can basically be ignored. Analyzing from the second sentence onward: it means every card of that suit also appears among the others, which leaves ♠️ and ♦️. From the third sentence, 2 and 7 can be ruled out, because they occur in two suits. Now the fourth sentence: since ♠️ still has several candidates left, only ♦️B would make the answer knowable.
https://double.thematlabprojects.com/2021/10/14/how-to-without-bayesian-statistics/ | # How To Without Bayesian Statistics
"How To Without Bayesian Statistics" (link) [in PDF] JASON BRONX'S TED Talk on Big Data has a couple of significant differences I learned from his talk. 1. I discovered Bob's Law. Bob's Law was something that drove people crazy, because it basically said that a few points in time don't make a whole lot of sense. By taking a simple line of reasoning, we could model correlations that produce no-fuss features — but is this really a problem? So, a correlation has a special method: Its power! So lets do.
## How To Statistical Sleuthing Through Linear Models The Right Way
Here’s what Bob’s Law looks like in action: – The More about the author of a small correlation is relatively small compared to the chance of a larger correlation in the same data. The probability of a small correlation is about the same as the chance of greater small correlations. In different settings, those should produce the same result, but they all our website something spectacular for an expert listener. If we could model a correlation across all measurements and see how how long each measurement lasted, and how many great correlations, we’d be able to calculate the time needed for each measurement from a real world chart. But what if you want more quantitative output into your studies? You can’t.
## Dear : You’re Not CMS EXEC
To do that, we need tools like FNB to do a much more sophisticated thing. For starters, there is the FNB parser built for you by Adam Davis, who also wrote an excellent blog. Note: I have yet to experiment with this parser but would love to! 4. Bayesian metrics are finite. A billion times less than we estimate.
## 3 Things You Didn’t Know about Klerer May System
The Bayesian statistic is fixed. Bayesian measures are defined as an arbitrary spread, like the same as a small difference, like a positive difference. In fact, for every measure in the dataset, there is an estimate of the magnitude of the variation that occurs in the specific measure, the Bayesian statistic. So for every mink I run through those models, in the future I can use a sample size of 10 for the Bayesian calculation, using a regression assumption $\sum_{M=0, 1}^{Y}$ and some other estimation function (for example the chi-square one, $z$). For any single and close quantile, in Figure 4 above, the Bayesian measurement shows Figure 4.
## Everyone Focuses On Instead, Data Management
The Bayesian estimate of a linear regression as the initial estimate of the Bayesian weight in square trials squared is $p$ The Bayesian estimate of the Bayesian sum sum analysis is p = 1(0, 1)(0, 1)=1 because both estimates are $p$. So, the estimates are well known from the everyday use of N+1 estimates in our databases to be reasonable. So in fact, the worst part of dealing with Bayesian data (in my experience) is I have to settle for absolute certainty. Even when the available input has the greatest power distribution we have, it reduces to a non-zero bound. This happens on many large datasets, and in particular the GIS package for TIS images.
## 3 Poison Distribution That Will Change Your Life
(Since if we’re interested see here now absolute dummies, we also have to consider dummies that are only 30% likely to be a priori “spurious” distributions, which means that the generalization problem comes after only the finite component measurements!) internet Bayesian data is not Visit Website | 2021-10-21 10:54:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7419109344482422, "perplexity": 935.4852066838558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585405.74/warc/CC-MAIN-20211021102435-20211021132435-00106.warc.gz"} |
http://mathoverflow.net/revisions/80986/list | 2 Endollared the LaTeX.
I want to know the following is well-known or not:
Let $X$ be a metric space with Hausdorff dimension $\alpha$. Then for any $\beta < \alpha$, $X$ contains a closed subset whose Hausdorff dimension is $\beta$.
1
# Question on geometric measure theory
I want to know the following is well-known or not:
Let X be a metric space with Hausdorff dimension \alpha. Then for any \beta < \alpha, X contains a closed subset whose Hausdorff dimension is \beta.
https://math.stackexchange.com/questions/3235191/side-length-of-largest-equilateral-triangle-to-fit-in-a-rectangle | # Side length of largest equilateral triangle to fit in a rectangle
I was trying to print out the largest possible equilateral triangle on a standard sheet of paper (8.5 by 11 inches) and got sidetracked into the following question: what is the maximum possible side length of an equilateral triangle to fit in a rectangle of size $$l$$ by $$w$$ ($$l \le w$$), and how would that equilateral triangle be placed in the rectangle?
I found the case of a square easily, but wasn't able to find the answer for a rectangle. I tried placing one vertex in a corner and the other two vertices on sides, but that gives me a solution not even on the rectangle.
Edit: I believe that when the ratio between $$l$$ and $$w$$ is less than a certain value, then a vertex is on a corner. Otherwise, I think the triangle will be set up so a base is on a side of the rectangle.
The equilateral triangle with the largest length should first be created at the corner. This is because any other equilateral triangle that fits can be translated such that one of its vertices is a corner of the rectangle.
Now, we need to split the problem into two cases:
1) $$l\ge \frac{w}{\sqrt3}$$: The largest triangle is the one with side length $$l$$ (in dimension $$l$$) and height $$\frac{l\sqrt{3}}{2}$$ (in dimension $$w$$). If we try to use a different angle, we will only get shorter sides.
2) $$l\le\frac{w}{\sqrt{3}}$$: In this case, we want one vertex of the triangle to be one of the rectangle's vertices and the other two on the sides of the rectangle that do not contain the shared vertex.
One method to approach this is complex coordinates. First, let us put the vertex that the rectangle and triangle share at the origin. We can set up the rectangle's vertices as $$(0,0),(l,0),(l,w),(0,w)$$.
Let $$(l,x)$$ be the point at which the triangle meets one side ($$x$$ is an unknown variable and $$l$$ is the length). Therefore, by complex-coordinate rotation, we get that the $$y$$-coordinate of this point rotated $$60^\circ$$ counterclockwise about the origin is $$\frac{x}2+\frac{l\sqrt{3}}2$$, which must also be $$w$$. (This is because the rotation of that point by $$60^\circ$$ is the third vertex of the triangle and lies on the top side of the rectangle.) From here, we get $$x+l\sqrt{3}=2w\rightarrow x=2w-l\sqrt{3}$$.
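For completeness, the rotation step written out (identifying the point $$(l,x)$$ with the complex number $$l+xi$$):
$$e^{i\pi/3}(l+xi)=\left(\tfrac12+\tfrac{\sqrt3}{2}i\right)(l+xi)=\left(\tfrac{l}{2}-\tfrac{\sqrt3}{2}x\right)+\left(\tfrac{\sqrt3}{2}l+\tfrac{x}{2}\right)i,$$
and requiring the imaginary part to equal $$w$$ gives exactly $$\frac{x}{2}+\frac{l\sqrt3}{2}=w$$.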
The length of one side is $$\sqrt{l^2+(2w-l\sqrt{3})^2}$$ via distance formula.
Let w be the shorter side.
Now, it's obvious (you stated it as well) that the triangle will be in the middle of the rectangle, not touching its vertices.
Suppose the biggest triangle is kept at an angle theta to the vertical as shown. But it's apparent that when rotated in the counterclockwise direction some space is left on the top (a possible extension of the side length). This shows that it couldn't have been the longest possible side.
Thus, in conclusion, the biggest possible equilateral triangle is kept with one side along the longer side and the side length being w/2 (where w is the shorter side).
This can also be deduced by starting as above and the side being w·sec(theta), theta ranging from 0 to 30deg.
• But this isn't always true, such as in the case of a square. Look at the bottom of this page mathworld.wolfram.com/EquilateralTriangle.html – automaticallyGenerated May 22 at 5:32
• Yes, that's indeed the case. However, we can take advantage of rotation since it's a rectangle and rotate the biggest triangle fixed at a vertice to see if it's the one, which is not possible for a square since the sides are equal and there's no room(if you know what I mean). (Argument very loosely placed, sorry about that) – Mike Karter May 22 at 5:39
https://borneomath.com/sample-paper-final-fmo-grade-3/ | # Sample Paper Final FMO Grade 3
Below are the problems and the answer key for the Fermat Mathematical Olympiad (FMO) 2021, grade 3 (source: Edukultur Indonesia).
1. What should be filled in the question mark?
A.-E. (the answer choices are pictures; images not preserved in this text version)
2. How many triangles are there in the figure below?
A. 6
B. 9
C. 13
D. 14
E. 15
3. Given the timetable of a circular bus travelling around the city. Lucy wants to travel from the Museum to the Cathedral. If she is waiting for the bus beside the Museum at 12:15, how many minutes does she have to spend on the bus?
A. 48
B. 34
C. 52
D. 55
E. 50
4. Candace stacked identical boxes to build the shape below. At least how many boxes did she use?
A. 23
B. 20
C. 21
D. 22
E. 19
5. There were two types of egg boxes. Each box contains either 8 eggs or 12 eggs. Anna bought some boxes. Which answer below CANNOT be the total number of eggs bought by Anna?
A. 16
B. 20
C. 28
D. 26
E. 24
6. The scores of 5 students are recorded in the chart with equally-spaced lines below. The greatest score difference between two friends is 16. Find the total sum of the scores of the 5 students.
A. 80
B. 90
C. 64
D. 48
E. 100
7. The shape including 14 cubes below is painted all over the surface (even the bottom). How many cubes having 4 faces painted are there?
A. 6
B. 7
C. 8
D. 9
E. 10
8. Find the missing number in the table below.
A. 43
B. 32
C. 24
D. 22
E. 34
9. Grandma has some candy jars. Each jar contains 2 apple candies and 4 banana candies OR 3 apple candies and 3 banana candies. Given that she has 17 banana candies, how many apple candies does she have?
A. 14
B. 13
C. 15
D. 12
E. 11
10. Teacher has a square piece of paper. She folds it in half three times then cuts out two triangles as in the figure below. What does the paper look like after being cut?
A.-E. (the answer choices are pictures; images not preserved in this text version)
11. A box contains 2 pencils, 3 red ball pens, 4 blue ball pens and 5 black ball pens. Amy cannot look into the box but she wants to take out 3 ball pens of the same color. What is the least number of pens she needs to take out to make sure?
A. 7
B. 3
C. 9
D. 12
E. None of the above
12. Which figure needs the least paint to be completely filled with color?
A.-E. (the answer choices are pictures; images not preserved in this text version)
13. Candace uses 3 squares and 1 rectangle to form a bigger square as below. If the perimeter of the smallest square is 12 cm, what is the perimeter of the shaded rectangle in cm?
A. 18
B. 24
C. 36
D. 40
E. None of the above
14. Find the suitable number to replace the question mark.
A. 24
B. 26
C. 12
D. 18
E. None of the above
15. Based on the pattern below, find the sum of all numbers in the $$20^{th}$$ row.
A. 314
B. 74
C. 326
D. 341
E. None of the above
16. 2021 people live on an island. Some of these people are truth-tellers and the others are liars. The truth-tellers always tell the truth whereas the liars always lie. Each day, one of the people says "When I have left the island the number of truth-tellers will be the same as the number of liars." Then this person leaves the island. After 2021 days there is no longer anybody living on the island. How many truth-tellers were living there in the beginning?
Answer key
https://www.gradesaver.com/textbooks/math/geometry/CLONE-df935a18-ac27-40be-bc9b-9bee017916c2/chapter-6-review-exercises-page-315/24e | ## Elementary Geometry for College Students (7th Edition)
Published by Cengage
# Chapter 6 - Review Exercises - Page 315: 24e
BC=4
#### Work Step by Step
$BC(AB+BC)=CF^2$
$5BC+BC^2=36$
$BC^2+5BC-36=0$
$(BC+9)(BC-4)=0$
$BC=4$
http://www.phy.ntnu.edu.tw/ntnujava/index.php?topic=1677.0 | NTNUJAVA Virtual Physics LaboratoryEnjoy the fun of physics with simulations! Backup site http://enjoy.phy.ntnu.edu.tw/ntnujava/
Topic: Solution for two linear equations
ahmedelshfie
« on: May 21, 2010, 12:28:44 am »
This applet is Solution for two linear equations
Created by prof Hwang Modified by Ahmed
Original project Solution for two linear equations
Assume there are two linear equations:
$a_1 x + b_1 y +c_1=0$
$a_2 x + b_2 y +c_2=0$
The solution is $x=\frac{b_1 c_2-c_1 b_2}{a_1 b_2-a_2 b_1}$, $y=\frac{a_2 c_1-a_1 c_2}{a_1 b_2-a_2 b_1}$
You can drag the circle to change the slope of the linear equation or drag the square to change the offset of the linear equations.
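In code, the displayed solution looks like this (a standalone sketch, independent of the applet; it just implements the formula above):

```java
public class TwoLinearEquations {
    /** Solves a1*x + b1*y + c1 = 0 and a2*x + b2*y + c2 = 0 by Cramer's rule. */
    public static double[] solve(double a1, double b1, double c1,
                                 double a2, double b2, double c2) {
        double det = a1 * b2 - a2 * b1;
        if (det == 0) throw new ArithmeticException("lines are parallel or identical");
        double x = (b1 * c2 - c1 * b2) / det;
        double y = (a2 * c1 - a1 * c2) / det;
        return new double[] {x, y};
    }

    public static void main(String[] args) {
        // x + y - 3 = 0 and x - y - 1 = 0  =>  x = 2, y = 1
        double[] p = solve(1, 1, -3, 1, -1, -1);
        System.out.println("x = " + p[0] + ", y = " + p[1]);
    }
}
```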
Attachment: Solution for two linear equations.gif
http://allmaths.blogspot.com/2011/11/nss-mathematics-co-geom-techniques.html | ## Saturday, 19 November 2011
### NSS Mathematics: Co-geom techniques
This is a question from the latest test. A, B, H are the points (-6,0), (4,0), (0,10) respectively. C1 and C2 are circles with AO and BO as diameters respectively; AH and BH meet C1 and C2 at F and G respectively.
ai) Show OGHF concyclic.
ii) Show AFGB concyclic.
bi) Find coordinate of G.
ii) Find circle containing AFGB.
We will focus on part b as it includes all method that we can solve a problem related to circle in the co-geom plane.
bi) Find coordinate of G.
Method 1: Circle-line intersection
Recall the equation of C2: $(x-2)^2+y^2=4$
Equation of line BH $y=mx+c$, putting $(0,10), (4,0)$ we get $y=-\frac{5}{2}x+10$.
Putting equation of line BH into equation C2: $(x-2)^2+(-\frac{5}{2}x+10)^2=4$.
By simplification we have $\frac{29}{4}x^2-54x+100=\frac{1}{4}(x-4)(29x-100)=0$. (4,0) is B so x=4 is rejected. Then $x=\frac{100}{29}$, and we can easily get $y=(\frac{100}{29})(\frac{-5}{2})+10=\frac{40}{29}$.
Method 2: (the easiest one) perpendicular line intersection
Observe that OG is perpendicular to BH due to angles in semi-circle,
$m_{BH}=\frac{-5}{2}$, $m_{OG}=-(m_{BH})^{-1}=\frac{2}{5}$. Since OG passes through the origin, $OG:y=\frac{2}{5}x$.
The intersection between line BH and line OG:
$y=\frac{-5}{2}x+10=\frac{2}{5}x$, we can easily obtain $(x,y)=(\frac{100}{29},\frac{40}{29})$.
Method 3: perpendicular line-circle intersection
The intersection between OG and C2 might be a bit easier than the intersection between BH and C2:
$(x-2)^2+y^2=(x-2)^2+(\frac{2}{5}x)^2=4$, x = 0 (rej. since it's O) and the same result as before.
Method 4: trigonometry method
Observe that the triangles HOB, HGO and OGB are similar. Let $\angle GOB = \angle OHB = \theta$ and let the coordinates of G be $(OG_x,OG_y)$.
$OG_x=|OG|\cos \theta = |OB|\cos ^2 \theta$, and similarly $OG_y=|OG|\sin \theta =|OB|\sin \theta \cos \theta$.
By definition we have $\cos \theta = \frac{10}{\sqrt{116}}$, $\sin \theta = \frac{4}{\sqrt{116}}$; by putting |OB| = 4 we get the same result.
(Note: |XY| is the length of line segment XY.)
bii) Find the equation of circle:
Method 1: general equation of circle
Assume the equation is $(x-x_0)^2+(y-y_0)^2=r^2$.
Putting point A and B we have $x_0=-1$, therefore the equation becomes $(x+1)^2+(y-y_0)^2=r^2$ and $25+y_0^2=r^2$.
Putting $(x,y)=(\frac{100}{29},\frac{40}{29})$, we have $(\frac{100}{29}+1)^2+(\frac{40}{29}-y_0)^2=r^2=25+y_0^2$
After a bunch of complex calculation (ugly number), we have $(x_0,y_0,r^2)=(-1,-1.2,26.44)$, therefore $(x+1)^2+(y+1.2)^2=26.44$ is the desired equation.
Method 2: Perpendicular bisector method
Recall the way you determine the circumcenter: it's the intersection point of the three perpendicular bisectors. Therefore if four points are concyclic, choose two perpendicular bisectors of them; the intersection point is the center of the circle. The radius can then be easily determined by the distance formula.
The perpendicular bisector of AB is trivially $x=-1$.
The perpendicular bisector of BG is given by the locus of P that $PG=PB$
$(x-\frac{100}{29})^2+(y-\frac{40}{29})^2=(x-4)^2+y^2$
$y=\frac{2}{5}x-\frac{4}{5}$
By putting x = -1, we have y = -6/5; the squared radius is the squared distance from the centre $(-1,-\frac{6}{5})$ to A: $r^2=(-6+1)^2+(0+\frac{6}{5})^2=26.44$, and the same result is given.
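As a quick check that G indeed lies on this circle:
$$\left(\frac{100}{29}+1\right)^2+\left(\frac{40}{29}+\frac{6}{5}\right)^2=\left(\frac{645}{145}\right)^2+\left(\frac{374}{145}\right)^2=\frac{416025+139876}{21025}=\frac{555901}{21025}=26.44,$$
in agreement with both methods.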
Conclusion:
1) Change circle problems into linear problems whenever possible.
2) Trigonometry is powerful when there's perpendicular pair of lines.
3) Finding intersection between circles is stupid.
4) Finding the equation of the circle in terms of $x^2+y^2+ax+by+c=0$ is stupid.
5) Perpendicular bisector is our new method to find equation of circle.
fin.
https://kb.osu.edu/dspace/handle/1811/20336 | # Knowledge Bank
## University Libraries and the Office of the Chief Information Officer
# INTRAMOLECULAR DYNAMICS OF THE $N = 2$ HF STRETCHING OVERTONE POLYAD OF $(HF)_{2}$ STUDIED BY HIGH-RESOLUTION cw-DIODE LASER CAVITY RING-DOWN SPECTROSCOPY IN A PULSED SLIT JET
Please use this identifier to cite or link to this item: http://hdl.handle.net/1811/20336
File: 2002-FA-08.jpg (340.3 KB, JPEG image)
Title: INTRAMOLECULAR DYNAMICS OF THE $N = 2$ HF STRETCHING OVERTONE POLYAD OF $(HF)_{2}$ STUDIED BY HIGH-RESOLUTION cw-DIODE LASER CAVITY RING-DOWN SPECTROSCOPY IN A PULSED SLIT JET
Creators: Hippler, Michael; Oeltjen, Lars; Quack, Martin
Issue Date: 2002
Abstract: The $(HF)_{2}$ hydrogen bonded dimer has been a prototype system for high-resolution spectroscopy since the pioneering studies of its microwave spectra by Dyke, Howard, and Klemperer in 1972.$^{1}$ Subsequently the HF stretching fundamentals were studied in 1983,$^{2}$ a low frequency fundamental analyzed in the far infrared in 1987,$^{3}$ HF stretching overtone spectra investigated by FTIR spectroscopy,$^{4}$ and finally full dimensional potential energy hypersurfaces developed of near to spectroscopic accuracy.$^{5,6}$ All these were "first" achievements prototypical for any type of hydrogen bonded dimer of this kind. Here we present the first study of the $N = 2$ HF stretching overtone polyad by very high resolution cw-diode laser cavity ring-down spectroscopy in pulsed slit jet expansions developed recently$^{7}$ (instrumental bandwidth about 1 MHz corresponding to a resolving power of $2 \times 10^{8}$). An analysis of all polyad subbands in terms of spectroscopic constants, tunneling splittings, Lorentzian predissociation and Doppler contributions to the linewidths will be presented.$^{8}$ The results agree well with full six-dimensional calculations$^{9}$ but disagree with simple models or approximate calculations that have been presented in the past.
URI: http://hdl.handle.net/1811/20336
Other Identifiers: 2002-FA-08
https://plainmath.net/pre-algebra/103001-what-is-the-reciprocal-of-4-11 | Alan Wright
2023-02-24
What is the reciprocal of $\frac{4}{11}$?
A. $\frac{4}{11}$
B. 11
C. 4
D. $-\frac{4}{11}$
Darien Jennings
A fraction's reciprocal is created by switching the numerator and denominator.
Reciprocal of $\frac{4}{11}=\frac{11}{4}$
http://frikimaths.blogspot.com/2012/09/ | ## September 27, 2012
### Let's get started with this new school year!!
Welcome back guys!
The first challenge it is quite simple: Try to explain the following situations:
- What values of $a$ would make the expression $\sqrt{a} < a$ be true?
- What values of $a$ would make the expression $\sqrt{a} > a$ be true?
And do not worry about the Radicals! In October an old friend of yours will be baaaack!! :D
http://math.stackexchange.com/questions/148524/mathematical-formula-for-biological-phenomena | # Mathematical formula for biological phenomena?
Math is strongly intertwined with Physics and Chemistry: it's used for an assortment of calculations and experiments. However, I find that Biology (at least elementary Biology) is severely lacking in mathematical models. Do mathematical models of biological phenomena (e.g. cell reproduction, anatomical systems etc.) exist? Are they simply too complicated and inapplicable to be taught in general classes, or have we just not been able to attribute mathematical models to these phenomena?
-
Yes, mathematical models of biology abound. There's even an entire branch of math (Mathematical Biology) that deals with creating and studying models of biological phenomenon. At least at my university, the first courses on this are taught at the senior undergraduate level. This is not "mathematical logic", though. It's mathematical modeling. – Arturo Magidin May 23 '12 at 2:12
Models do exist, and the study of such models is a very hot field right now called mathematical biology. But explicit calculations are not generally feasible, as the systems in question are **unbelievably** complicated. Take the nervous system for example. The human nervous system has approximately 100 billion neurons. In contrast, the C. elegans has 302 neurons, and only recently have computers and graph-theoretic algorithms become powerful enough to analyze the system. – Alex Becker May 23 '12 at 2:16
You might look at "Mathematical Models in Biology" by my colleague Leah Edelstein-Keshet ec-securehost.com/SIAM/CL46.html – Robert Israel May 23 '12 at 2:22
Also Martin Novak's text on evolutionary dynamics might be of interest: amazon.com/Evolutionary-Dynamics-Exploring-Equations-Life/dp/… – student May 23 '12 at 4:01
Mathematical biology is a very active field. As a starting point, you might look at the Wikipedia article on Mathematical and theoretical biology. The Society for Mathematical Biology publishes the Bulletin of Mathematical Biology. There are also a Journal of Mathematical Biology and the open access Journal of Mathematical Neuroscience, both published by Springer. In the August 2010 Notices of the AMS there's a seven-page essay on What Is Mathematical Biology and How Useful Is It? by Avner Friedman.
It’s true that the subject has only relatively recently percolated into undergraduate curricula, though I remember teaching some very elementary modeling of epidemics back in the 70s. For one thing, modern computing has made parts of it considerably more accessible than they used to be. But it’s getting there. Links on this Math Archives page show that there are courses in aspects of the subject at the undergraduate as well as the graduate level. Indeed, the Society for Mathematical Biology lists several schools offering undergraduate majors in some sort of mathematical biology. The list isn’t complete, either: the University of Houston also offers such a major, as does the University of Pittsburgh, and McGill offers a joint major in biology and mathematics. A Biologist's Guide to Mathematical Modeling in Ecology and Evolution, by Sarah P. Otto and Troy Day, is expressly designed to make the techniques of mathematical modeling available to students and biologists who don’t already have more mathematical background than first-year calculus.
And of course biostatistics has become an indispensable part of biology and medicine and is increasingly showing up in undergraduate statistics programs.
-
Here is one of my favorites, the Lotka-Volterra equations used to model predator/prey relations: http://en.wikipedia.org/wiki/Lotka%E2%80%93Volterra_equation.
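To give a concrete feel for what such a model looks like in practice, here is a minimal simulation sketch of the Lotka-Volterra system (forward Euler integration; all parameter values are made up for illustration, not fitted to any real population):

```java
/**
 * Minimal forward-Euler sketch of the Lotka-Volterra predator-prey model:
 *   dx/dt = alpha*x - beta*x*y,   dy/dt = delta*x*y - gamma*y.
 */
public class LotkaVolterra {
    public static void main(String[] args) {
        double alpha = 1.1, beta = 0.4;   // prey birth rate, predation rate
        double delta = 0.1, gamma = 0.4;  // predator reproduction and death rates
        double x = 10.0, y = 10.0;        // initial prey and predator populations
        double dt = 0.001;                // time step

        for (int step = 0; step <= 50_000; step++) {
            if (step % 5_000 == 0) {
                System.out.printf("t=%6.2f  prey=%8.3f  predators=%8.3f%n",
                        step * dt, x, y);
            }
            double dx = (alpha * x - beta * x * y) * dt;
            double dy = (delta * x * y - gamma * y) * dt;
            x += dx;
            y += dy;
        }
    }
}
```

Even this crude integrator reproduces the characteristic out-of-phase oscillation of the two populations.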
Not to mention the essential use of statistics in biology and medicine to tease out correlations only observable in large data sets. In fact some of the most important statistical tests used today were developed by the biologist and mathematician Ronald Fisher: http://en.wikipedia.org/wiki/Ronald_Fisher
However the comparative lack of mathematical methods in biology compared to other scientific fields is something worth thinking about. One possible answer is that full blown biological systems (with perhaps hundreds, thousands, or more interacting elements) are just too complicated to accurately model using mathematics. This, I think, is quickly becoming outdated, especially with the rise of computational tools and large data sets in biology. A pessimist about progress in biology might say that biology is just not well developed enough yet. Chemistry and physics too had long historical phases where very little was done using quantitative methods. Pushing back against this, people like Peter Godfrey-Smith have argued that perhaps this idea of "mature" sciences, drawn from the history of physics and chemistry, needs to be reexamined. For biology has been quite successful, practical and interesting even without heavy use of mathematics. (This is more of an aside to your original question but interesting nonetheless.)
For a cutting edge mathematical model, there are quite serious attempts to model complicated biological systems mathematically. One intriguing example is the Blue Brain project in Switzerland http://bluebrain.epfl.ch/. Their first major goal was to model a rat neocortical column using one virtual neuron for every real neuron (a real column has something like 10,000 neurons and $10^8$ synapses). Needless to say there is lots of mathematics and computation involved in this project!
-
It seems fitting that an answer emphasizing the important role of statistics is given by "student". – KCd May 23 '12 at 3:09
Oh yes, why fitting? – student May 23 '12 at 3:24
@student: Ever heard of Student's $t$ distribution? – Arturo Magidin May 23 '12 at 3:50
hahah, oh yes I have – student May 23 '12 at 3:51
http://scratchpad.wikia.com/wiki/M07M3 | # M07M3
M07M3 is a short name for the third problem in the Classical Mechanics section of the May 2007 Princeton University Prelims. The problem statement can be found in the problems list. Here is the solution.
(a)
Define x as the coordinate that goes downward and y as the coordinate that goes to the left. Take a small piece of string and write its horizontal and vertical equations of motion:
$\tau cos\theta(x)=\tau cos(\theta(x+dx))+\frac{m}{l}dx g$
$\tau sin(\theta(x))\theta'(x)=\frac{m}{l} g \quad Vertical$
$\frac{m}{l}dx\ddot{y}=\tau sin[\theta(x)+\theta'(x)dx]-\tau sin\theta(x)=\tau cos\theta(x) \theta'(x) dx$
$\frac{m\ddot{y}}{l}=\tau cos\theta \theta' \quad Horizontal$
In all these equations, $\theta$ is defined as the angle that a differential piece of string makes with the vertical line. We can express it in terms of y as:
$tan\theta=y'$
$sec^2\theta \theta'=y''$
Plug this in to get:
$\tau sin\theta y'' cos^2\theta=\frac{mg}{l}$
$\frac{m\ddot{y}}{l}=\tau y'' cos^3\theta$
At the top we have y=0, and at the bottom:
$\tau cos\theta(l)=Mg$
$-\tau sin\theta(l)=M\ddot{y}(l)$
For small oscillations, these become:
$\frac{m\ddot{y}}{l}=\tau y''$
$\tau=Mg$
$-\tau y'(l)=M\ddot{y}(l) \quad \text{(bottom)}$
$y(0)=0 \quad \text{(top)}$
(b)
Let $v^2 \equiv \frac{\tau l}{m}$, and let $k\equiv\omega/v$. Then a wavemode with frequency $\omega$ has a form:
$y(x,t)=Asin(\omega t+\phi)sin(kx+\varphi)$
$y(0,t)=A\sin(\omega t+\phi)\sin\varphi=0 \rightarrow \varphi=0$ (from the top condition)
$kg\cos(kl)=\omega^2\sin(kl) \rightarrow (kl) \tan(kl)=\frac{m}{M}$ (from the bottom condition)
$(\omega l/v)tan(\omega l/v)=\frac{m}{M}$
(c)
To lowest order we get $\omega=0$, but then there is no motion. To first order in $m/M$, we get:
$(kl)^2=\frac{m}{M}$
$\omega=\sqrt{\frac{g}{l}}$
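Unpacking that step: from $(kl)^2=\frac{m}{M}$ and $v^2=\frac{\tau l}{m}$ with $\tau=Mg$,
$$\omega^2=v^2k^2=\frac{\tau l}{m}\cdot\frac{m}{M}\cdot\frac{1}{l^2}=\frac{\tau}{Ml}=\frac{g}{l}.$$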
The whole system swings like a pendulum of length l. The next lowest frequency will be for the case when (kl) is not small but tan(kl) is:
$kl \approx \pi +\epsilon$
$\omega=\pi\sqrt{\frac{Mg}{ml}}$
In this case, the point mass remains fixed, acting as a node, while the string oscillates back and forth. You can see that the frequency is much larger than the one for pendulum motion.
https://alacaze.net/publication/handbook-prob/ | Frequentism in probability theory
Type
Publication
The Oxford Handbook of Probability and Philosophy
https://mathoverflow.net/questions/302361/regularity-of-the-jacobian-of-a-w2-n-sobolev-mapping | # Regularity of the Jacobian of a $W^{2,n}$ Sobolev mapping
Given a mapping in the Sobolev space $f\in W^{2,n}_{\rm loc}(\mathbb{R}^n,\mathbb{R}^n)$, I would like to know the Sobolev regularity of the Jacobian $J_f=\operatorname{det} Df$.
It is well known and easy to prove that if $u,v\in W^{1,p}\cap L^\infty(\mathbb{R}^n)$, then $uv\in W^{1,p}\cap L^\infty$. Indeed, the product of a bounded and an $L^p$ function is in $L^p$, and the same argument applies to the derivatives: $\partial_i(uv)=(\partial_i u)v+u\,\partial_i v\in L^p$. Now if $u\in W^{1,n}$ then $u$ has very high integrability (Trudinger's inequality), so if $u,v\in W^{1,n}$ (no longer bounded), then $uv$ must belong to some Orlicz-Sobolev space slightly larger than $W^{1,n}$. Thus my question is:
Let $u_1,\ldots,u_n \in W^{1,n}(B^n(0,1))$. Find an optimal (or close to optimal) Orlicz-Sobolev space $W^{1,P}$ for some Young function $P$ such that $u_1\cdot\ldots\cdot u_n\in W^{1,P}$.
In fact I would like to know if one can find $P$ so that the so-called divergence condition $$\int_1^\infty \frac{P(t)}{t^{n+1}}\, dt =\infty$$ is satisfied.
Since the derivatives of $f\in W^{2,n}(\mathbb{R}^n,\mathbb{R}^n)$ belong to $W^{1,n}$ such a result will imply that $J_f=\det Df\in W^{1,P}.$
A form of Hölder's inequality in Orlicz spaces asserts that, if $f_1\in L^{A_1},\ldots,f_n\in L^{A_n}$, and $B$ is such that $$A_1^{-1}(t)\cdots A_n^{-1}(t)\leq cB^{-1}(t) \quad \text{for } t\geq 0, \tag{1}$$ for some constant $c$, then $f_1 f_2\cdots f_n\in L^B$ and $$\Vert f_1 f_2\cdots f_n\Vert_{L^B}\leq C\Vert f_1\Vert_{L^{A_1}}\cdots\Vert f_n\Vert_{L^{A_n}},$$ for some constant $C$. If the domain has finite measure, then (1) is only required for sufficiently large $t$.
Now if $u_1,\ldots,u_n\in W^{1,n}$, then $u_i\in\exp L^{n'}$ for every $i$ (Trudinger's inequality). In view of condition (1), with $A_i(t)=t^n$ and $A_j(t)=e^{t^{n'}}$ for $j\neq i$, the product rule yields that $$\nabla(u_1\cdots u_n)\in L^P$$ if $$t^{1/n}(\log t)^{1/n'}\cdots(\log t)^{1/n'}\leq cP^{-1}(t)$$ for large $t$ (if the domain has finite measure), where $(\log t)^{1/n'}$ appears $(n-1)$ times. Thus $P$ has to fulfill $$t^{1/n}(\log t)^{\frac{(n-1)^2}{n}}\leq cP^{-1}(t)$$ so the best possible choice of $P$ is $$P(t)=t^n(\log t)^{-(n-1)^2} \tag{2}$$ for large $t$. The divergence condition is only satisfied for $n=2$.
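To check the divergence condition for this $P$, substitute $s=\log t$: $$\int_e^\infty \frac{P(t)}{t^{n+1}}\,dt=\int_e^\infty \frac{dt}{t(\log t)^{(n-1)^2}}=\int_1^\infty \frac{ds}{s^{(n-1)^2}},$$ which diverges precisely when $(n-1)^2\leq 1$, i.e. when $n=2$.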
If $f\in W^{2,n}_{\rm loc}(\mathbb{R}^n,\mathbb{R}^n)$, then $J_f=\det Df\in W^{1,P}_{\rm loc}$, where $P$ is given by (2).
http://en.wikipedia.org/wiki/Branch_and_Bound | # Branch and bound
Branch and bound (BB or B&B) is a general algorithm for finding optimal solutions of various optimization problems, especially in discrete and combinatorial optimization. A branch-and-bound algorithm consists of a systematic enumeration of all candidate solutions, where large subsets of fruitless candidates are discarded en masse, by using upper and lower estimated bounds of the quantity being optimized.
The method was first proposed by A. H. Land and A. G. Doig[1] in 1960 for discrete programming.
## General description
In order to facilitate a concrete description, we assume that the goal is to find the minimum value of a function $f(x)$, where $x$ ranges over some set $S$ of admissible or candidate solutions (the search space or feasible region). Note that one can find the maximum value of $f(x)$ by finding the minimum of $g(x) = -f(x)$. (For example, $S$ could be the set of all possible trip schedules for a bus fleet, and $f(x)$ could be the expected revenue for schedule $x$.)
A branch-and-bound procedure requires two tools. The first one is a splitting procedure that, given a set $S$ of candidates, returns two or more smaller sets $S_1, S_2, \ldots$ whose union covers $S$. Note that the minimum of $f(x)$ over $S$ is $\min\{v_1, v_2, \ldots\}$, where each $v_i$ is the minimum of $f(x)$ within $S_i$. This step is called branching, since its recursive application defines a tree structure (the search tree) whose nodes are the subsets of $S$.
The second tool is a procedure that computes upper and lower bounds for the minimum value of $f(x)$ within a given subset of $S$. This step is called bounding.
The key idea of the BB algorithm is: if the lower bound for some tree node (set of candidates) $A$ is greater than the upper bound for some other node $B$, then $A$ may be safely discarded from the search. This step is called pruning, and is usually implemented by maintaining a global variable $m$ (shared among all nodes of the tree) that records the minimum upper bound seen among all subregions examined so far. Any node whose lower bound is greater than $m$ can be discarded.
The recursion stops when the current candidate set $S$ is reduced to a single element, or when the upper bound for set $S$ matches the lower bound. Either way, any element of $S$ will be a minimum of the function within $S$.
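As a concrete illustration of this scheme, here is a minimal sketch for the 0/1 knapsack problem (a maximization problem — equivalently, minimizing the negated value). It branches on "take / skip item i", bounds with the greedy fractional relaxation, and prunes against the best complete solution found so far; all names are illustrative, and item weights are assumed positive.

```python
def knapsack_bb(values, weights, capacity):
    # Sort items by value density so the fractional bound is greedy-optimal.
    items = sorted(zip(values, weights), key=lambda vw: vw[0] / vw[1], reverse=True)
    n = len(items)
    best = 0  # value of the best complete solution found so far (the global m)

    def bound(i, value, room):
        # Optimistic bound: fill the remaining room greedily, allowing a
        # fraction of one item; never underestimates the best completion.
        b = value
        while i < n and items[i][1] <= room:
            b += items[i][0]
            room -= items[i][1]
            i += 1
        if i < n:
            b += items[i][0] * room / items[i][1]
        return b

    def visit(i, value, room):
        nonlocal best
        best = max(best, value)  # every node is a feasible partial solution
        if i == n or bound(i, value, room) <= best:
            return  # prune: this subtree cannot beat the incumbent
        if items[i][1] <= room:  # branch 1: take item i
            visit(i + 1, value + items[i][0], room - items[i][1])
        visit(i + 1, value, room)  # branch 2: skip item i

    visit(0, 0, capacity)
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # prints 220
```

The same skeleton works for a minimization problem once `bound` is replaced by a valid lower bound and the pruning inequality is flipped.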
When $\mathbf{x}$ is a vector of $\mathbb{R}^n$, branch and bound algorithms can be combined with interval analysis[2] and contractor techniques in order to provide guaranteed enclosures of the global minimum.[3][4]
## Applications
This approach is used for a number of NP-hard problems.
Branch-and-bound may also be a base of various heuristics. For example, one may wish to stop branching when the gap between the upper and lower bounds becomes smaller than a certain threshold. This is used when the solution is "good enough for practical purposes" and can greatly reduce the computations required. This type of solution is particularly applicable when the cost function used is noisy or is the result of statistical estimates and so is not known precisely but rather only known to lie within a range of values with a specific probability. An example of its application here is in biology when performing cladistic analysis to evaluate evolutionary relationships between organisms, where the data sets are often impractically large without heuristics[citation needed].
https://imathworks.com/tex/tex-latex-last-visited-url-in-apa-style/ | # [Tex/LaTex] Last visited URL in apa style
apa-style, bibliographies, urls
I'm using the apa-good.bst file to typeset my bibliography in APA style. I need to include something like "Last visited…" (using the urldate variable) when I cite a webpage but I don't know how I can do this. Can anyone help me?
Your referenced bibliography style isn't available at CTAN. So I guess you are using the following file:
ucbthesis -- LaTeX template for typesetting UCB thesis -- apa-good.bst
The style support the following entries:
address author booktitle chapter edition
editor howpublished institution journal
key month note number organization pages
publisher school series title type url
volume year
In relation to my previous answer
URL of cited web site in bibliography
You can simple add to the field note:
note="Last visited..."
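A hypothetical entry using this workaround could look as follows (the key, author and URL are placeholders; the note text is free-form):

@misc{somepage,
author = "John Doe",
title = "Some Web Page",
url = "http://www.example.com",
note = "Last visited on May 23, 2012"
}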
http://www.physicsforums.com/showpost.php?p=3804951&postcount=3 | View Single Post
Ok thanks. I think I have understood how to find the selection rules. Could you explain why the vibrational modes only describe the excited modes and not the ground state? I guess it is like that because A1g is not contained in the vibrational modes of graphite (2 * E1u + 2 * E2g + 2 * A2u + 2 * B1g). I think it's logical, as you say, that the ground state is fully symmetrical (guess you could think of it as all the atoms being at their equilibrium sites, maybe with some caution), but as I said I'm not quite sure why the ground state is not in the problem.
Let me explain: if I count the dimensions of the representations 2 * E1u + 2 * E2g + 2 * A2u + 2 * B1g, I get 12, which I expected because there are 4 atoms in the unit cell, each with three degrees of freedom. The representations tell me that I can divide these eigenfunctions into sets that transform among each other, but the fully symmetric representation is not present; that is, none of the eigenfunctions of the problem has the full symmetry of the problem, and hence none is the ground state. This is what I mean by "it seems like the ground state has to be treated separately". I guess somehow the representation theory approach only describes excited modes, but I fail to see why that is? Hope that explains my problem a bit clearer.
https://kluedo.ub.uni-kl.de/home/index/help | ## Help
### Miscellaneous
What is KLUEDO?
Welcome to this electronic document repository named KLUEDO. This server is maintained by the University Library of Kaiserslautern. You can find electronic fulltexts here. If you are the author of a doctoral thesis or habilitation or anything else interesting for the scientific community at this university and want to publish your work online, you can do so by clicking on "Publish".
Is the publication free of charge?
Yes! Basically all publications are free of charge at the moment.
Hints for searching
Search tips:
1. Personal names are structured as follows: surname, first name
2. To search for several consecutive terms in exactly the given order, enclose them in double quotation marks.
3. In the simple search, multiple search terms are automatically combined with AND. For a more targeted search, it is recommended to combine several search terms using Boolean operators. The operators must be written in CAPITAL LETTERS.
AND: All entered search terms must occur in the document.
OR: Only one of the entered search terms needs to occur in the document. This function broadens the search and is useful when different terms exist for the same topic.
Orange OR Apfelsine
NOT: This function has an excluding effect and can be used to narrow down the result set: documents containing the first search term must not contain the second.
Orange NOT Apfelsine
The Boolean operators also work in the advanced search. There they can be used to combine different search terms within one search field. For example, the following query returns all documents by the author "Mustermann" for the years 2002 or 2003:
Author: Mustermann
Year: 2002 OR 2003
4. Single unknown characters within a term can be replaced by the placeholder ? (question mark). For example, the query Ma?er in the Author field returns, among others, the different spellings Maier, Mayer, etc.
5. Multiple unknown characters within a term can be replaced by the placeholder * (asterisk). For example, the query Ma*er in the Person field returns, among others, the different spellings and names Maier, Mayer, Maurer, etc.
Publication of doctoral theses / habilitations
If you want to publish your doctoral thesis or habilitation on KLUEDO, please note the following.
• PDF-Document
To publish a doctoral thesis or habilitation, only PDF documents are accepted. Additional materials like graphics can be uploaded and published with the document.
• Privacy Note
It's usually prescribed in the promotion regulations that the dissertation must include a curriculum vitae. Documents published on KLUEDO are freely accessible worldwide, being indexed by search engines, and are also being archived at the German National Library. We strongly recommend that you limit the vita in the electronic version of your dissertation to the scientific career. (See also: http://www.dissonline.de/recht/datenschutz.htm). In particular, the specification of sensitive personal data (e.g. date of birth, marital status, etc.) should be avoided.
• Required Fields on the cover sheet
You can look at the cover sheets of the dissertations already published on KLUEDO. They meet the requirements for release on KLUEDO and have the following mandatory information on the cover sheet:
• Identifier of the University Library (D 386)
• Date of disputation
• At least two reviewers
• Specification for approval by the relevant department
I have some more questions - whom shall I contact?
#### Questions related to KLUEDO project
Gisela Weber (project management)
phone: 0631 / 205-2399
fax: 0631 / 205-2355
Questions to the team and the project management of KLUEDO:
e-mail: kluedo@ub.uni-kl.de
#### Questions related to the technology
Sven Heitmann (technology)
phone: 0631 / 205-2813
Representation:
Michael Neufing (technology)
phone: 0631 / 205-2269
#### Questions related to the delivery of the printed version of a dissertation
Kathrin Engelkamp-Kutas
phone: 0631 / 205-2369
phone: 0631 / 205-3190
e-mail: engelkamp@ub.uni-kl.de
Universitätsbibliothek Kaiserslautern
- Geschenk- und Tauschstelle -
Postfach 2040
67608 Kaiserslautern
#### Questions related to promotion regulations
The responsible department (deanery).
What do I need to publish here?
Please provide the following materials to publish something here:
• your document(s), which you want to publish
... and, well, of course you need a little bit of time to fill the publishing form.
Please ensure before publication that you are authorized to do that. Basically you are authorized if you are the author or creator yourself and have not given exclusive rights to a third person or company. If you published in parallel at a professional publisher or plan to do that, please ensure that the publisher allows you to make parallel publications on the university's electronic document repository.
What's a presentation format?
Presentation format means the file that the user views. Distinct from this is the original format, which serves as the data basis for the presentation format. HTML and XML are well-known formats for presentation on the internet; for electronic documents (like the ones on this repository server) the most common presentation format is PDF.
What are the requirements for my document(s)?
• Accepted file-formats
Your document must be a scientific document, readable with a prevalent document reader program. Currently we are accepting the following formats:
PDF (text documents)
BMP, JPG/JPEG, PNG, TIF/TIFF (image files)
M4A, MP3, WAV (audio files)
LPD, M4V, MPG/MPEG, RM, RV (video files)
• No copy restrictions and other DRM
The document must not be copy restricted or access restricted in another way. This is a requirement by the German National Library to ensure long term archivation of the document.
Are there restrictions on file size?
The system only accepts uploads of files up to 100MB. If you want to publish larger files, please skip the file upload and bring the files on CD/DVD to the Tauschstelle of the university library or send them by mail.
What do I have to consider writing my text?
You can find help to create scientific texts in a lot of sources. General hints, which formats are usable to publish online, are offered by DissOnline.
The University Library of Freiburg offers a tutorial on publishing in PDF format (in german language only).
If you want to write your work with TeX or LaTeX (this may be very useful on engineering topics with many mathematical formulas!), so you can find some interesting books about that in the OPAC of the UB.
Further information and helpful format exhibitions for doctoral thesises, State doctorates and Bachelor-, Master-, Magister- and Diploma-thesises are offered by the Humboldt University of Berlin on their electronic document repository edoc.
How do I publish my work here?
Please click on the publication link on the start page.
First you are requested to choose a document type. Right beneath that you can upload your document files. Read the legal information and guidance and confirm it by activating a checkbox. Having done so, you may go to the next step. What follows now is the actual form. You have to fill in data about your publication here (so-called metadata), which is used to describe your work in catalogues and other bibliographic directories.
Mandatory fields are for example:
• the title of your publication and the language of the title
• the abstract of the document
• the publication date of your document (normally the day you publish it online)
• the language of the document (must be choosen from a list)
If you are unsure what to fill in certain form fields, you can point the cursor on a field and an explaining help text will be shown.
After you have finished the form, all data will be displayed once again for a check-up and you then have two possibilities: you can correct them if necessary or simply save them directly.
Are there any special things to know about publishing a preprint?
Preprints are papers which are not yet published by a professional publisher, but such a publication is planned and the publisher has already accepted the work for printing. Depending on the publisher, there are several terms to observe when preprinting. Many publishers require a link on our document repository which references the final version on the publisher's page. Please tell us when your preprint is published, so that we can comply with the requirements of the publisher!
Special procedures for publication in a series / collection
Optionally, you can assign a document to a series or collection. Both options can be selected directly in the publication process.
If a document should be assigned to a series (counted) a band name is required. This name may not be chosen freely, but is given by the respective faculty. At each department deanery you can get the contact information of the person who is responsible for the allocation of the band names.
To assign a document to a collection (uncounted) no band name is required.
Retention period for dissertations with pending patent application
If your dissertation is related to a patent application, you may request that the printed copies and the electronic version of your thesis will be published by the Univerity Library after a retention period of one year.
If this is the case, please use the normal publication process of KLUEDO, but do not upload a fulltext and fill in only the metadata of your dissertation. Use the field "Note" in the section "room for notices" to write down a hint that your publication should be delayed because of a pending patent application. After completion of the publication process, you give the electronic version of your dissertation on a CD or DVD along with your printed copies to the Tauschstelle. In addition, please submit the completed and signed form for a retention period (the form is only available in German) at the same place.
For further questions you can contact the Tauschstelle.
Available document types
• (Scientific) Article
Document type (scientific) Article includes documents that have been published as article, editorial, register, table of contents or editorial section of a scientific journal or scientific periodical (postprint).
• Bachelor’s Thesis
Document type Bachelor's Thesis refers to the lowest level of a written thesis (usually after 3 years of study).
• Book
Document type Book (Monograph) is intended for classic monographic publications.
• Conference Object
Document type Conference Object includes all kinds of documents connected to a conference (conference papers, conference reports, conference lectures, contributions to conference proceedings, conference contributions, abstracts, volumes of conference contributions, conference posters).
• Contribution to a (non-scientific) Periodical
Document type Contribution to (non-scientific) Periodical refers to contributions in newspapers, weekly magazines or other non-scientific periodicals.
• Course Material
Document type Course Material refers to teaching material in the broadest sense, e.g. lecture recordings as video or audio files, exercise material, preparation or exam material. Lecture texts as such, however, are represented by document type Lecture.
• Doctoral Thesis
Document type Doctoral Thesis refers to a scientific paper leading to a doctoral degree.
• Habilitation
Document type Habilitation refers to a scientific work in Habiliation to acquire a teaching license.
• Image
Document type Image refers to a non-textual visual representation. Examples are pictures of photographs of objects, paintings, prints, drawings, other images and graphics, animations and moving images, films, diagrams, maps or sheet music. This document type can be used for digital and physical objects.
• Lecture
Document type Lecture includes university speeches, lectures and inaugural lectures.
• Master’s Thesis
Document type Master's Thesis refers to the medium level of a written thesis and also includes written theses completed before the Bologna process for academic degrees equivalent to the current master degree (‘Magister‘, ‘Uni-Diplom‘, ‘Staatsexamen‘).
• Misc
Document type Misc is intended for everything that does not fit in any of the existing document types.
• Moving Image
Document type Moving Image refers to a series of visual representations that convey the impression of movement when shown sequentially. Examples are animations, films, TV shows, videos, zoetropes or the visual representation of a simulation.
• Part of a Book
Document type Part of a Book (Chapter) represents documents that have been prepared within the framework of a monographic publication, such as chapters or contributions to compilations.
• Periodical
Document type Periodical includes magazines or periodicals, with the metadata related to the magazine or periodical as a whole.
• Periodical Part
Document type Periodical Part represents documents that have been prepared within the framework of a periodical publication.
• Preprint
Document type Preprint includes preliminary scientific or technical papers that are not published in a series of an institution, but are to appear in a scientific journal or as part of a book.
• Report
Document type Report includes textual material that cannot be categorized as any of the other types, e.g. reports, external research reports, internal reports, memos, statistical reports, project completion reports, technical documentations and instructions.
• Review
Document type Review refers to reviews of books or article and/or summaries of a publication that have not been written by the author.
• Sound
Document type Sound refers to a resource whose primary aim is to be heard, e.g. music files, audio CDs, speech and sound recordings. No differentiation is made between sounds, noise and music.
• Study Thesis
Document type Study Thesis refers to textual elaborations that are prepared as part of a course of study (term papers, seminar reports, investigation and project reports) and are not categorized as thesis.
• Working Paper
Document type Working Paper refers to a preliminary scientific or technical paper that is published in a series of an institution (also: Research Paper, Research Memorandum, Discussion Paper).
The description of the types of documents were largely taken from the documentation of OPUS.
(Source: OPUS 4 Manual, Version 1.4 (21.02.2011), S. 62-64)
We will check your documents for functionality and formal issues. If we need more data or have questions, we will get in contact with you. If the data is valid, we will include the document in KLUEDO. Only then will it be visible in KLUEDO. If no further queries are necessary, processing usually takes no longer than two business days.
If your document is a dissertation, you will also receive a written confirmation of the online publication. We will send the written confirmation for online publication together with the written confirmation for delivery of printed copies by internal mail directly to your appropriate deanery when the verification process is finished. Please upload dissertations in good time to ensure processing within the prescribed time limits.
Metadata can be defined as
• data describing one or more ressources
• or as
• data associated with an object and describing it
Basically metadata is describing documents, objects or services and contains information about their content, structure or form. More abstractly metadata is a description of data or "data about data". Bibliographic data sets and catalog entries in library catalogs can be seen as a kind of metadata.
This repository is using metadata in the Dublin Core Metadata Element Set (short Dublin Core (DC)), which has fifteen basic elements. Dublin Core is the result of international efforts to reach a collective consensus in describing electronic objects (in the broadest sense). The Library of Congress (LoC), the Online Computer Library Center (OCLC) and several national libraries are dealing with Dublin Core in many projects and are close to introducing the system respectively.
Basic help about the publication form
Describe the document you want to upload using the categories and fields on our publication form. The marked elements (sign: *) are mandatory (you have to type something there). Please describe your document as clear as possible.
If you need help with some certain elements, move the mouse pointer over the field name.
Thank you for supporting us.
If you need german umlauts and cannot find them on your keyboard, here they are to copy and paste:
ä ö ü Ä Ö Ü ß
Form element Document Type
This element is mandatory for every publication! It contains the type of publication, for example thesis, preprint, study paper etc. You can select a type from a list.
Use of formulas in abstracts
KLUEDO uses the open-source project MathJax to display formulas in abstracts. For input of formulas, use the syntax of LaTeX and escape them with $ and $, otherwise the formula will not be displayed correctly. The formula will be displayed in a separate paragraph. If formulas should be displayed inline with the normal text, please use $$ and $$ to escape the formulas.
Some information about generating formulas in the syntax of LaTeX is provided by the LaTeX-Kochbuch (only available in German).
Students and employees of Kaiserslautern University of Technology also have access to the video tutorial "LaTeX" at video2brain.de (only available in German).
You can display formulas like $$c^2 = a^2 + b^2$$ inline with the normal text, or multiline formulas in a separate paragraph: $F_\alpha(x) = \sum_{n=0}^\infty \frac{(2n-1)^n} {n \Phi(n + 2\alpha - 1)}{\left({\frac{2x}{3}}\right)}^{2(n + 1) + \alpha}$ Your abstract will then be continued below the formula.
In most cases this is no problem. Please check in any case whether your publisher allows a parallel publication! To do that you can use the SHERPA/ROMEO list.
For this repository, multiple publication on different webservers is no problem. But you have to verify whether your publisher authorizes parallel publishing. Some publishers have special requirements, like setting a link to the publisher's fulltext on the open access server.
You can find information about the publisher's handling of online-publications in the Sherpa/Romeo-list.
Please tell us in any case if your work is published by a third person or publisher and that the publication on this repository is a parallel publication. Then we will add a link and/or a reference to the (printed or anywhere else published) ressource.
8 facts you should know about open access
1. All subject-relevant open access journals are listed in the DOAJ - Directory of Open Access Journals.
2. You can publish not only in open access journals, but also in open access archives and repositories. These archives, including the Kaiserslautern document server KLUEDO, are listed in the ROAR - Registry of Open Access Repositories and in the DOAR - Directory of Open Access Repositories.
3. Uploading an article to such an archive usually takes only a few minutes - try it on the Kaiserslautern document server KLUEDO.
4. Most publishers now allow their authors parallel open access publication on university servers. Check whether your publisher also permits this form of "self archiving". Information on publishers' policies regarding parallel online publication can be found in the Sherpa/Romeo directory.
5. Even if the author has already granted the exclusive right of use to a publisher, under certain conditions he can make use of his right of secondary publication (German Copyright Act, § 38 (4)). Further information on copyright and in particular on the right of secondary publication is provided by the priority initiative "Digitale Information".
6. Fewer and fewer publishers reject an article for publication ("Ingelfinger Rule") merely because it has already been published on a university server.
7. Articles published open access reach a larger audience than articles published in expensive journals, and thus increase the coveted impact factor of your scientific work.
8. Open access protects authorship: whoever publishes their work open access at an early stage documents their intellectual authorship much faster than is possible with conventional publication processes, which often drag on for months if not years.
Implement the open access principle yourself and publish (in parallel) on your university's document server - in KLUEDO!
OPUS 4
This document server is based on the repository software OPUS 4.4. OPUS is documented here: http://opus4.kobv.de/.
Disclaimer
The information published here is collected with care, but does not guarantee to be current, absolutely correct or complete. All services free of charge are non-binding. The maintainer reserves the right to change, add or terminate certain services or parts of them without explicit notification. The maintainer of this service is not liable for contents of foreign pages, which are accessible via a hyperlink. The hyperlinks used in this service are collected with reasonable care. The maintainer cannot influence the current or future contents of foreign pages. So the maintainer is not liable for the contents of those pages and does not adopt them as its own. Only the maintainer of the external pages is liable for illegal, deficient or non-complete contents as for damages, which may occur by using or not-using his information. The liability of the person who makes a hyperlink to that page is excluded. The copyright for the documents on this server always remains with the authors. The University Library endeavours to use self-made or licence-free texts and graphics designing their web services. All trademarks and material copyrighted by third parties mentioned in our texts are under the laws and rules of the corresponding current property rights of the owner.
https://www.physicsforums.com/threads/keplers-3rd-law.232808/ | # Kepler's 3rd Law
1. May 2, 2008
### Reverie29
1. The problem statement, all variables and given/known data
A satellite is in a circular orbit very close to the surface of a spherical planet. The period of the orbit is 2.49 hours.
What is density of the planet? Assume that the planet has a uniform density.
2. Relevant equations
T^2 = (4(pi)^2*r^3) / GM
3. The attempt at a solution
Okay, so I converted the period into seconds and got 8964 seconds.
Then I rearranged the equation to get
M/r^3 = 4(pi)^2 / GT^2, assuming that M/r^3 would get me density.
So then according to that Density = 4(pi)^2 / (6.67e-11 N*m^2/kg^2)(8964 s)^2 which gives 7366 kg/m^3 which is not correct.
Or am I missing something about density? Density is mass divided by area. Should I be finding a radius to find the area and then find the mass somehow... I don't know. I'm confused on what to do.
2. May 2, 2008
### rock.freak667
$$\rho = \frac{M}{V}$$
Assuming the planet is a perfect sphere, $V=\frac{4}{3} \pi r^3$
So
$$\rho = \frac{M}{\frac{4}{3} \pi r^3} = \frac{3M}{4\pi r^3}$$
3. May 2, 2008
### Janus
Staff Emeritus
Your problem is in assuming that M/r^3 gives you density.
Density is mass divided by volume. So what is the formula for the volume of a sphere?
4. May 2, 2008
### Reverie29
Okay.
The density of a sphere is 3M / (4(pi)r^3). And I have already solved for M/r^3. I tried multiplying by 3 and dividing by 4pi, but still got an incorrect answer. I got 17,356 kg/m^3.
Should I be looking at another equation?
6. May 2, 2008
### rock.freak667
$$T^2=\frac{4\pi^2 r^3}{GM}$$
$$\frac{1}{T^2}=\frac{GM}{4\pi^2 r^3}$$
$$\frac{1}{T^2}=\frac{G}{3\pi}\, \frac{3M}{4\pi r^3}$$
$$\frac{1}{T^2}=\frac{G\rho}{3\pi}$$
and then you got $\rho$ to be that value? If so and you calculated correctly...that should be the answer.
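From the last line, $\rho = \frac{3\pi}{GT^2}$. A minimal Python check of that number, assuming $G=6.674\times 10^{-11}\ \mathrm{m^3\,kg^{-1}\,s^{-2}}$:

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
T = 2.49 * 3600  # orbital period in seconds (= 8964 s)

rho = 3 * math.pi / (G * T**2)
print(rho)       # about 1.76e3 kg/m^3
```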
7. May 2, 2008
### Reverie29
I have no idea why it increased! I guess I must be calculator retarded. I've got it now, thanks!!!
https://in.mathworks.com/help/wavelet/ref/imodwt.html | # imodwt
Inverse maximal overlap discrete wavelet transform
## Syntax
``xrec = imodwt(w)``
``xrec = imodwt(w,wname)``
``xrec = imodwt(w,Lo,Hi)``
``xrec = imodwt(___,lev)``
``xrec = imodwt(___,'reflection')``
## Description
`xrec = imodwt(w)` reconstructs the signal based on the maximal overlap discrete wavelet transform (MODWT) coefficients in `w`. By default, `imodwt` assumes that you obtained `w` using the `'sym4'` wavelet with periodic boundary handling. If you do not modify the coefficients, `xrec` is a perfect reconstruction of the signal.
`xrec = imodwt(w,wname)` reconstructs the signal using the orthogonal wavelet `wname`. `wname` must be the same wavelet used to analyze the signal input to `modwt`.
`xrec = imodwt(w,Lo,Hi)` reconstructs the signal using the orthogonal scaling filter `Lo` and the wavelet filter `Hi`. The `Lo` and `Hi` filters must be the same filters used to analyze the signal input to `modwt`.
`xrec = imodwt(___,lev)` reconstructs the signal up to level `lev`. `xrec` is a projection onto the scaling space at level `lev`. The default level is 0, which results in perfect reconstruction if you do not modify the coefficients.
`xrec = imodwt(___,'reflection')` uses the reflection boundary condition in the reconstruction. If you specify `'reflection'`, `imodwt` assumes that the length of the original signal is one half the number of columns in the input coefficient matrix. By default, `imodwt` assumes periodic signal extension at the boundary. You must enter the entire character vector `'reflection'`. If you added a wavelet named `'reflection'` using the wavelet manager, you must rename that wavelet prior to using this option. `'reflection'` may be placed in any position in the input argument list after `w`.
## Examples
Obtain the MODWT of an ECG signal and demonstrate perfect reconstruction.
Load the ECG signal data and obtain the MODWT.
`load wecg;`
Obtain the MODWT and the Inverse MODWT.
```w = modwt(wecg); xrec = imodwt(w);```
Use the L-infinity norm to show that the difference between the original signal and the reconstruction is extremely small. The largest absolute difference between the original signal and the reconstruction is on the order of $10^{-12}$, which demonstrates perfect reconstruction.
`norm(abs(xrec'-wecg),Inf)`
```ans = 2.3255e-12 ```
Obtain the MODWT of Deutsche Mark-U.S. Dollar exchange rate data and demonstrate perfect reconstruction.
Load the Deutsche Mark-U.S. Dollar exchange rate data.
`load DM_USD;`
Obtain the MODWT and the Inverse MODWT using the `'db2'` wavelet.
```wdm = modwt(DM_USD,'db2'); xrec = imodwt(wdm,'db2');```
Use the L-infinity norm to show that the difference between the original signal and the reconstruction is extremely small. The largest absolute difference between the original signal and the reconstruction is on the order of $10^{-13}$, which demonstrates perfect reconstruction.
`norm(abs(xrec'-DM_USD),Inf)`
```ans = 1.6370e-13 ```
Obtain the MODWT of an ECG signal using the Fejér-Korovkin filters.
`load wecg`
Create the 8-coefficient Fejér-Korovkin filters. Use the filters to obtain the MODWT of the ECG data.
```[~,~,Lo,Hi] = wfilters("fk8"); wtecg = modwt(wecg,Lo,Hi);```
Obtain the inverse MODWT using the filters.
`xrec = imodwt(wtecg,Lo,Hi);`
Obtain a second inverse MODWT using the wavelet name. Confirm both inverse transforms are equal.
```xrec2 = imodwt(wtecg,"fk8"); max(abs(xrec-xrec2))```
```ans = 0 ```
Plot the original data and one of the reconstructions.
```
subplot(2,1,1)
plot(wecg)
title("ECG Signal")
subplot(2,1,2)
plot(xrec)
title("Reconstruction")
```
Obtain the MODWT of an ECG signal down to the maximum level and obtain the projection of the ECG signal onto the scaling space at level 3.
`load wecg;`
Obtain the MODWT.
`wtecg = modwt(wecg);`
Obtain the projection of the ECG signal onto $V_3$, the scaling space at level three, by using the `imodwt` function.
`v3proj = imodwt(wtecg,3);`
Plot the original signal and the projection.
```
subplot(2,1,1)
plot(wecg)
title('Original Signal')
subplot(2,1,2)
plot(v3proj)
title('Projection onto V3')
```
Note that the spikes characteristic of the R waves in the ECG are missing in the $V_3$ approximation. You can see the missing details by examining the wavelet coefficients at level three.
Plot the level-three wavelet coefficients.
```
figure
plot(wtecg(3,:))
title('Level-Three Wavelet Coefficients')
```
Obtain the inverse MODWT using reflection boundary handling for Southern Oscillation Index data. The sampling period is one day. `imodwt` with the `'reflection'` option assumes that the input matrix, which is the `modwt` output, is twice the length of the original signal length. `imodwt` reflection boundary handling reduces the number of wavelet and scaling coefficients at each scale by half.
```load soi; wsoi = modwt(soi,4,'reflection'); xrecsoi = imodwt(wsoi,'reflection');```
Use the L-infinity norm to show that the difference between the original signal and the reconstruction is extremely small. The largest absolute difference between the original signal and the reconstruction is on the order of $10^{-11}$, which demonstrates perfect reconstruction.
`norm(abs(xrecsoi'-soi),Inf)`
```ans = 1.6421e-11 ```
Load the 23 channel EEG data `Espiga3` [2]. The channels are arranged column-wise. The data is sampled at 200 Hz.
`load Espiga3`
Obtain the maximal overlap discrete wavelet transform down to the maximum level.
`w = modwt(Espiga3);`
Reconstruct the multichannel signal. Plot the original data and reconstruction.
```
xrec = imodwt(w);
subplot(2,1,1)
plot(Espiga3)
title('Original Data')
subplot(2,1,2)
plot(xrec)
title('Reconstruction')
```
## Input Arguments
MODWT transform of a signal or multisignal down to level L, specified as a matrix or 3-D array, respectively. `w` is an L+1-by-N matrix for the MODWT of an N-point signal, and an L+1-by-N-by-NC array for the MODWT of an N-by-NC multisignal. By default, `imodwt` assumes that you obtained the MODWT using the `'sym4'` wavelet with periodic boundary handling.
Data Types: `single` | `double`
Complex Number Support: Yes
Synthesis wavelet, specified as a character vector or string scalar. The wavelet must be orthogonal. Orthogonal wavelets are designated as type 1 wavelets in the wavelet manager, `wavemngr`.
Valid built-in orthogonal wavelet families are: Best-localized Daubechies (`"bl"`), Beylkin (`"beyl"`), Coiflets (`"coif"`), Daubechies (`"db"`), Fejér-Korovkin (`"fk"`), Haar (`"haar"`), Han linear-phase moments (`"han"`), Morris minimum-bandwidth (`"mb"`), Symlets (`"sym"`), and Vaidyanathan (`"vaid"`).
For a list of wavelets in each family, see `wfilters`. You can also use `waveinfo` with the wavelet family short name. For example, `waveinfo("db")`. Use `wavemngr("type",wn)` to determine if the wavelet wn is orthogonal (returns 1). For example, `wavemngr("type","db6")` returns 1.
The synthesis wavelet must be the same wavelet used in the analysis with `modwt`.
Filters, specified as a pair of even-length real-valued vectors. `Lo` is the scaling filter, and `Hi` is the wavelet filter. `Lo` and `Hi` must be the same filters used in the analysis with `modwt`. The filters must satisfy the conditions for an orthogonal wavelet. The lengths of `Lo` and `Hi` must be equal. See `wfilters` for additional information. You cannot specify both `wname` and a filter pair `Lo,Hi`.
Note
To agree with the usual convention in the implementation of `modwt` in numerical packages, the roles of the analysis and synthesis filters returned by `wfilters` are reversed in `imodwt`. See Inverse MODWT with Specified Filters.
Data Types: `single` | `double`
Reconstruction level, specified as a nonnegative integer between 0 and `size(w,1)-2`. The level must be less than the level used to obtain `w` from `modwt`. If `lev` is 0 and you do not modify the coefficients, `imodwt` produces a perfect reconstruction of the signal.
## Output Arguments
Reconstructed version of the original signal or multisignal based on the MODWT and the level of reconstruction, returned as a vector or matrix.
## References
[1] Percival, Donald B., and Andrew T. Walden. Wavelet Methods for Time Series Analysis. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge ; New York: Cambridge University Press, 2000.
[2] Mesa, Hector. “Adapted Wavelets for Pattern Detection.” In Progress in Pattern Recognition, Image Analysis and Applications, edited by Alberto Sanfeliu and Manuel Lazo Cortés, 3773:933–44. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. https://doi.org/10.1007/11578079_96.
## Version History
Introduced in R2015b
http://gradestack.com/Review-CFA-Level-1-CFA/Time-Value-of-Money/IRR/14402-2904-975-study-wtw | IRR
Discount rate that makes NPV of all cash flows equal to zero.
For mutually exclusive projects, NPV and IRR can give conflicting rankings. NPV is a better measure in such cases.
Q: If I have to invest today $2,000 for a project which gives me $100 next year, $200 the next, and $250 after that till perpetuity, should I make this investment?
Cost of Capital = 10%.
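A minimal sketch of the NPV check, assuming the $250 perpetuity starts in year 3 (so its value at the end of year 2 is 250/r):

```python
# NPV of: -2000 today, +100 in year 1, +200 in year 2,
# then +250 per year forever starting in year 3.
r = 0.10
npv = -2000 + 100/(1 + r) + 200/(1 + r)**2 + (250/r)/(1 + r)**2
print(npv)  # about +322
```

Since the NPV at the 10% cost of capital is positive, the IRR exceeds 10%, so the investment should be made.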
https://www.physicsforums.com/threads/3-dimensional-parametric-equations.167026/ | # 3-dimensional parametric equations
1. Apr 22, 2007
### JolleJ
3-dimensional parametric equations [Updated]
Look lower for update....
1. The problem statement, all variables and given/known data
Well, my problem is that I need to give some examples on 3-dimensional parametric equations. So far I've found out what parametric equations are, and more specifically what 3-dimensional parametric equations are. But now I am being asked to give some real-world examples of these.
2. Relevant equations
A 3-dimensional parametric equation describes something in a 3D coordinate system, where each coordinate x, y, z is expressed in terms of the same parameter t: x(t) = f1(t), y(t) = f2(t), z(t) = f3(t)
3. The attempt at a solution
Well, so far I've found out that solar winds, aurorae and the movement of the plasma inside a tokamak can all be described by 3-dimensional parametric equations. My problem is that while I know the movements can be expressed by 3-dimensional parametric equations, I have absolutely no idea what these equations look like. I've searched all around the Internet, but I can't find any equations for this - or anything at all that looks like it.
I hope you can help.
Update:
I have now advanced a bit, and actually found a simulation of the particles moving inside a tokamak, which shows that the particles drift up or down depending on their charge q. So now I have a new problem:
1. The problem statement, all variables and given/known data
My problem is now that I do not understand the mathematics/physics equations used in the simulation.
The simulation starts by introducing all the variables and functions:
Code (Text):
B0:=1
v,m:=1,.01
x,y,z:=3,0,0
vx,vy,vz:=v,v*q,0
t,dt:=0,.01
Integratemethod:=RK4
func det(a,b,c,d)
return a*d - b*c
endfunc
func R(x,y)
return (x^2+y^2)
endfunc
func acc(va,vb,ba,bb)
return (va*bb-vb*ba)/m
endfunc
func Bx(x,y,z)
return y*B0/R(x,y)
endfunc
func By(x,y,z)
return -x*B0/R(x,y)
endfunc
func Bz(x,y,z)
return 0
endfunc
Model tokamak
x':=vx
y':=vy
z':=vz
vx':=q*det(vy,vz,By(x,y,z),Bz(x,y,z))/m
vy':=q*det(vz,vx,Bz(x,y,z),Bx(x,y,z))/m
vz':=q*det(vx,vy,Bx(x,y,z),By(x,y,z))/m
endmodel
After this, it runs a loop which repeatedly integrates the "tokamak" model (why this?), and then adds the time difference dt to the time variable t:
Loop:
Code (Text):
integrate tokamak(t,dt)
t:=t+dt
2. Relevant equations
I can see that the function det is computing a determinant, though I do not know why this is relevant.
All of it has something to do with vectors, but I am not sure how.
3. The attempt at a solution
Tried looking at it for so long, but I am not good enough at vectors and integration yet, so I simply cannot see exactly what is going on.
I really hope that one of you can open my eyes.
Last edited: Apr 23, 2007
2. Apr 22, 2007
### Mindscrape
Movement inside a Tokamak will be a complicated example of a 3-dimensional parametric equation. Basically any 3-D motion can be parametrized. A particle moving in a straight line, for example, would follow a motion of $$f(t) = at\mathbf{i} + bt\mathbf{j} + ct\mathbf{k}$$ where a, b, and c are constants and i, j, and k are x, y, and z coordinate directions, respectively. The familiar example of projectile motion could be described as $$f(t) = at\mathbf{i} + bt\mathbf{j} - \tfrac{1}{2}gt^2 \mathbf{k}$$.
Other, more complicated examples, could be a helix $$s(t) = Rcos(t)\mathbf{i} + Rsin(t)\mathbf{j} + ct\mathbf{k}$$.
If you know Calc 1, you could find a Calc 3 book that will have some good examples of 3-D parametric equations.
3. Apr 23, 2007
### JolleJ
Thanks very, very much. In reality I would like some advanced examples of 3d parametric equations, like the Tokamak. Any chance that I can find some equations for it somewhere?
When you say Calc 3, do you mean Calculus 3?
4. Apr 23, 2007
### JolleJ
Updated my question...
5. Apr 23, 2007
:rofl:
What do you think, of course he means calc 3. :rofl: I mean, calculus 3. What country are you in? here in the states schools break down Calculus into 3 parts, 1, 2 , 3. 3 is vector Calculus.
6. Apr 23, 2007
### JolleJ
Well, I'm from Denmark. And here, we certainly do not split the subjects up like that... But now I know. Thanks.
7. Apr 23, 2007
### Mindscrape
You don't know what a 3-D parametric equation is but you understand advanced examples? In the United States, we split Calculus into derivatives, integrals and series, and multivariate calculus.
If you want some more "advanced" examples, you might try looking in a book on Electricity and Magnetism.
8. Apr 24, 2007
### JolleJ
I know what 3D parametric equations are, but I'm having a hard time finding some good examples.
Anyway, I'm still trying to crack through the code...
9. Apr 24, 2007
### Mindscrape
Is that pseudocode or a specific program's code? As far as I can tell, it is code applied to a specific example, with predetermined conditions (a uniform magnetic field perpendicular to a plane, a given B-field magnitude, and other such things).
The loop at the end probably starts at t=0 (right?) and numerically integrates the functions up top (declared under tokamak) over each small time increment dt (set to 0.01 near the top), then loops through all the way until a certain time t=t_final. This will give an approximate function of position, since it will give points of position along each incremental dt.
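For anyone wanting to experiment, here is a rough Python translation of what the posted code is doing. This is a sketch, not the original program: in particular, q is never assigned in the posted snippet, so q = +1 is assumed below, and the det(...) calls are just the components of the cross product in the Lorentz force.

```python
import numpy as np

B0, m, q = 1.0, 0.01, 1.0          # q is not set in the snippet; assume +1
v = 1.0

def B(r):
    """Toroidal-style field B = B0 * (y, -x, 0) / (x^2 + y^2)."""
    x, y, z = r
    return B0 * np.array([y, -x, 0.0]) / (x**2 + y**2)

def deriv(state):
    """state = (x, y, z, vx, vy, vz); returns d(state)/dt.
    det(vy, vz, By, Bz) = vy*Bz - vz*By is the x-component of v x B,
    so the three det calls together are the Lorentz acceleration (q/m) v x B."""
    r, vel = state[:3], state[3:]
    a = q * np.cross(vel, B(r)) / m
    return np.concatenate([vel, a])

def rk4_step(state, dt):
    """One classical Runge-Kutta step -- the 'RK4' the loop integrates."""
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * dt * k1)
    k3 = deriv(state + 0.5 * dt * k2)
    k4 = deriv(state + dt * k3)
    return state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

state = np.array([3.0, 0.0, 0.0, v, v * q, 0.0])
dt = 0.01
for _ in range(1000):              # the 'integrate tokamak(t, dt)' loop
    state = rk4_step(state, dt)
print(state[:3])                   # position after t = 10
```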
# Random walks conditioned to stay positive
In this post, I’m going to discuss some of the literature concerning the question of conditioning a simple random walk to lie above a line with fixed gradient. A special case of this situation is conditioning to stay non-negative. Some notation first. Let $(S_n)_{n\ge 0}$ be a random walk with IID increments, with distribution X. Take $\mu$ to be the expectation of these increments, and we’ll assume that the variance $\sigma^2$ is finite, though at times we may need to enforce slightly stronger regularity conditions.
(Although simple symmetric random walk is a good example for asymptotic heuristics, in general we also assume that if the increments are discrete they don’t have parity-based support, or any other arithmetic property that prevents local limit theorems holding.)
We will investigate the probability that $S_n\ge 0$ for n=0,1,…,N, particularly for large N. For ease of notation we write $T=\inf\{n\ge 0\,:\, S_n<0\}$ for the hitting time of the negative half-plane. Thus we are interested in $S_n$ conditioned on T>N, or T=N, mindful that these might not be the same. We will also discuss briefly to what extent we can condition on $T=\infty$.
In the first paragraph, I said that this is a special case of conditioning SRW to lie above a line with fixed gradient. Fortunately, all the content of the general case is contained in the special case. We can recast the question of $S_n$ conditioned to stay above $n\alpha$ until step N as the question of $S_n-n\alpha$ (which, naturally, has drift $\mu-\alpha$) conditioned to stay non-negative until step N, by a direct coupling.
Applications
Simple random walk is a perfectly interesting object to study in its own right, and this is a perfectly natural question to ask about it. But lots of probabilistic models can be studied via naturally embedded SRWs, and it’s worth pointing out a couple of applications to other probabilistic settings (one of which is the reason I was investigating this literature).
In many circumstances, we can describe random trees and random graphs by an embedded random walk, such as an exploration process, as described in several posts during my PhD, such as here and here. The exploration process of a Galton-Watson branching tree is a particularly good example, since the exploration process really is simple random walk, unlike in, for example, the Erdos-Renyi random graph G(N,p), where the increments are only approximately IID. In this setting, the increments are given by the offspring distribution minus one, and the hitting time of -1 is the total population size of the branching process. So if the expectation of the offspring distribution is at most 1, then the event that the size of the tree is large is an atypical event, corresponding to delayed extinction. Whereas if the expectation is greater than one, then it is an event with limiting positive probability. Indeed, with positive probability the exploration process never hits -1, corresponding to survival of the branching tree. There are plenty of interesting questions about the structure of a branching process tree conditional on having atypically large size, including the spine decomposition of Kesten [KS], but the methods described in this post can be used to quantify the probability, or at least the scale of the probability, of this atypical event.
In my current research, I’m studying a random walk embedded in a construction of the infinite-volume DGFF pinned at zero, as introduced by Biskup and Louidor [BL]. The random walk controls the gross behaviour of the field on annuli with dyadically-growing radii. Anyway, in this setting the random walk has Gaussian increments. (In fact, there is a complication because the increments aren’t exactly IID, but that’s definitely not a problem at this level of exposition.) The overall field is decomposed as a sum of the random walk, plus independent DGFFs with Dirichlet boundary conditions on each of the annuli, plus asymptotically negligible corrections from a ‘binding field’. Conditioning that this pinned field be non-negative up to the Kth annulus corresponds to conditioning the random walk to stay above the magnitude of the minimum of each successive annular DGFF. (These minima are random, but tightly concentrated around their expectations.)
Conditioning on $\{T > N\}$
When we condition on $\{T>N\}$, obviously the resulting distribution (of the process) is a mixture of the distributions we obtain by conditioning on each of $\{T=N+1\}, \{T=N+2\},\ldots$. Shortly, we’ll condition on $\{T=N\}$ itself, but first it’s worth establishing how to relate the two options. That is, conditional on $\{T>N\}$, what is the distribution of T?
Firstly, when $\mu>0$, this event always has positive probability, since $\mathbb{P}(T=\infty)>0$. So as $N\rightarrow\infty$, the distribution of the process conditional on $\{T>N\}$ converges to the distribution of the process conditional on survival. So we’ll ignore this for now.
In the case $\mu\le 0$, everything is encapsulated in the tail of the probabilities $\mathbb{P}(T=N)$, and these tails are qualitatively different in the cases $\mu=0$ and $\mu<0$.
When $\mu=0$, then $\mathbb{P}(T=N)$ decays polynomially in N. In the special case where $S_n$ is simple symmetric random walk (and N has the correct parity), we can check this just by an application of Stirling’s formula to count paths with this property. By contrast, when $\mu<0$, even demanding $S_N=-1$ is a large deviations event in the sense of Cramer’s theorem, and so the probability decays exponentially with N. Mogulskii’s theorem gives a large deviation principle for random walks to lie above a line defined on the scale N. The crucial fact here is that the probabilistic cost of staying positive until N has the same exponent as the probabilistic cost of being positive at N. Heuristically, we think of spreading the non-expected behaviour of the increments uniformly through the process, at only polynomial cost once we’ve specified the multiset of values taken by the increments. So, when $\mu<0$, we have
$\mathbb{P}(T\ge(1+\epsilon)N) \ll \mathbb{P}(T= N).$
Therefore, conditioning on $\{T\ge N\}$ in fact concentrates T on N+o(N). Whereas by contrast, when $\mu=0$, conditioning on $\{T\ge N\}$ gives a nontrivial limit in distribution for T/N, supported on $[1,\infty)$.
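These two regimes are easy to see in a quick Monte Carlo sketch (Gaussian increments; the drift and horizon values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def stay_positive(mu, N, trials=100_000):
    """Estimate P(T > N) = P(S_1, ..., S_N >= 0) for Gaussian(mu, 1) steps."""
    S = np.cumsum(rng.normal(mu, 1.0, size=(trials, N)), axis=1)
    return (S >= 0).all(axis=1).mean()

for N in (10, 20, 40):
    # polynomial decay (about N^{-1/2}) for mu = 0; exponential for mu < 0
    print(N, stay_positive(0.0, N), stay_positive(-0.5, N))
```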
A related problem is the value taken by $S_N$, conditional on {T>N}. It’s a related problem because the event {T>N} depends only on the process up to time N, and so given the value of $S_N$, even with the conditioning, after time N, the process is just an unconditioned RW. This is a classic application of the Markov property, beloved in several guises by undergraduate probability exam designers.
Anyway, Iglehart [Ig2] shows an invariance principle for $S_N | T>N$ when $\mu<0$, without scaling. That is $S_N=\Theta(1)$, though the limiting distribution depends on the increment distribution in a sense that is best described through Laplace transforms. If we start a RW with negative drift from height O(1), then it hits zero in time O(1), so in fact this shows that conditional on $\{T\ge N\}$, we have T=N+O(1) with high probability. When $\mu=0$, we have fluctuations on a scale $\sqrt{N}$, as shown earlier by Iglehart [Ig1]. Again, thinking about the central limit theorem, this fits the asymptotic description of T conditioned on T>N.
Conditioning on $T=N$
In the case $\mu=0$, conditioning on T=N gives
$\left[\frac{1}{\sqrt{N}}S(\lfloor Nt\rfloor ) ,t\in[0,1] \right] \Rightarrow W^+(t),$ (*)
where $W^+$ is a standard Brownian excursion on [0,1]. This is shown roughly simultaneously in [Ka] and [DIM]. This is similar to Donsker’s theorem for the unconditioned random walk, which converges after rescaling to Brownian motion in this sense, or Brownian bridge if you condition on $S_N=0$. Skorohod’s proof for Brownian bridge [Sk] approximates the event $\{S_N=0\}$ by $\{S_N\in[-\epsilon \sqrt{N},+\epsilon \sqrt{N}]\}$, since the probability of this event is bounded away from zero. Similarly, but with more technicalities, a proof of convergence conditional on T=N can approximate by $\{S_m\ge 0, m\in[\delta N,(1-\delta)N], S_N\in [-\epsilon \sqrt{N},+\epsilon\sqrt{N}]\}$. The technicalities here emerge since T, the first return time to zero, is not continuous as a function of continuous functions. (Imagine a sequence of processes $f^N$ for which $f^N(x)\ge 0$ on [0,1] and $f^N(\frac12)=\frac{1}{N}$.)
Once you condition on $T=N$, the mean $\mu$ doesn’t really matter for this scaling limit. That is, so long as variance is finite, for any $\mu\in\mathbb{R}$, the same result (*) holds, although a different proof is in general necessary. See [BD] and references for details. However, this is particularly clear in the case where the increments are Gaussian. In this setting, we don’t actually need to take a scaling limit. The distribution of Gaussian *random walk bridge* doesn’t depend on the mean of the increments. This is related to the fact that a linear transformation of a Gaussian is Gaussian, and can be seen by examining the joint density function directly.
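For the Gaussian case this drift-invariance can be checked numerically in a couple of lines; a sketch with illustrative parameters, looking at the midpoint of the bridge $S_k-\frac{k}{N}S_N$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials = 20, 100_000

def bridge_midpoint(mu):
    """Midpoint of the bridge S_k - (k/N) S_N for Gaussian(mu, 1) steps."""
    S = np.cumsum(rng.normal(mu, 1.0, size=(trials, N)), axis=1)
    return S[:, N // 2 - 1] - 0.5 * S[:, -1]

for mu in (0.0, 1.0, -2.0):
    b = bridge_midpoint(mu)
    print(mu, round(b.mean(), 3), round(b.std(), 3))  # same law for every mu
```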
Conditioning on $T=\infty$
When $\mu>0$, the event $\{T=\infty\}$ occurs with positive probability, so it is well-defined to condition on it. When $\mu\le 0$, this is not the case, and so we have to be more careful.
First, an observation. Just for clarity, let’s take $\mu<0$, and condition on $\{T>N\}$, and look at the distribution of $S_{\epsilon N}$, where $\epsilon>0$ is small. This is approximately given by
$\frac{S_{\epsilon N}}{\sqrt{N}}\stackrel{d}{\approx}W^+(\epsilon).$
Now take $\epsilon\rightarrow 0$ and consider the RHS. If instead of the Brownian excursion $W^+$ we had Brownian motion, we could specify the distribution exactly. But in fact, we can construct Brownian excursion as the solution to an SDE:
$\mathrm{d}W^+(t) = \left[\frac{1}{W^+(t)} - \frac{W^+(t)}{1-t}\right] \mathrm{d}t + \mathrm{d}B(t),\quad t\in(0,1)$ (**)
for B a standard Brownian motion. I might return in the next post to why this is valid. For now, note that the first drift term pushes the excursion away from zero, while the second term brings it back to zero as $t\rightarrow 1$.
From this, the second drift term is essentially negligible if we care about scaling $W^+(\epsilon)$ as $\epsilon\rightarrow 0$, and we can say that $W^+(\epsilon)=\Theta(\sqrt{\epsilon})$.
So, returning to the random walk, we have
$\frac{S_{\epsilon N}}{\sqrt{\epsilon N}}\stackrel{d}{\approx} \frac{W^+(\epsilon)}{\sqrt{\epsilon}} = \Theta(1).$
At a heuristic level, it’s tempting to try ‘taking $N\rightarrow\infty$ while fixing $\epsilon N$‘, to conclude that there is a well-defined scaling limit for the RW conditioned to stay positive forever. But we came up with this estimate by taking $N\rightarrow\infty$ and then $\epsilon\rightarrow 0$ in that order. So while the heuristic might be convincing, this is not the outline of a valid argument in any way. However, the SDE representation of $W^+$ in the $\epsilon\rightarrow 0$ regime is useful. If we drop the second drift term in (**), we define the three-dimensional Bessel process, which (again, possibly the subject of a new post) is the correct scaling limit we should be aiming for.
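As an aside on simulation: the excursion SDE (**) is singular at both endpoints and awkward to discretise, whereas the three-dimensional Bessel process can be realised exactly as the modulus of a three-dimensional Brownian motion. A minimal sketch (step size and horizon arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
steps, dt = 100_000, 1e-4

# A Bessel(3) process is the modulus of a three-dimensional Brownian motion;
# unlike the excursion SDE above, this is numerically harmless to simulate.
dB = rng.normal(0.0, np.sqrt(dt), size=(steps, 3))
W = np.cumsum(dB, axis=0)
bessel3 = np.linalg.norm(W, axis=1)

print(bessel3.min(), bessel3[-1])  # strictly positive after time 0 (a.s.)
```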
Finally, it’s worth observing that the limit $\{T=\infty\}=\lim_{N\rightarrow\infty} \{T>N\}$ is a monotone limit, and so further tools are available. In particular, if we know that the trajectories of the random walk satisfy the FKG property, then we can define this limit directly. It feels intuitively clear that random walks should satisfy the FKG inequality (in the sense that if a RW is large somewhere, it’s more likely to be large somewhere else). You can do a covariance calculation easily, but a standard way to show the FKG inequality applies is by verifying the FKG lattice condition, and unless I’m missing something, this is clear (though a bit annoying to check) when the increments are Gaussian, but not in general. Even so, defining this monotone limit does not tell you that it is non-degenerate (ie almost-surely finite), for which some separate estimates would be required.
A final remark: in a recent post, I talked about the Skorohod embedding, as a way to construct any centered random walk where the increments have finite variance as a stopped Brownian motion. One approach to conditioning a random walk to lie above some discrete function is to condition the corresponding Brownian motion to lie above some continuous extension of that function. This is a slightly stronger conditioning, and so any approach of this kind must quantify how much stronger. In Section 4 of [BL], the authors do this for the random walk associated with the DGFF conditioned to lie above a polylogarithmic curve.
References
[BD] – Bertoin, Doney – 1994 – On conditioning a random walk to stay nonnegative
[BL] – Biskup, Louidor – 2016 – Full extremal process, cluster law and freezing for two-dimensional discrete Gaussian free field
[DIM] – Durrett, Iglehart, Miller – 1977 – Weak convergence to Brownian meander and Brownian excursion
[Ig1] – Iglehart – 1974 – Functional central limit theorems for random walks conditioned to stay positive
[Ig2] – Iglehart – 1974 – Random walks with negative drift conditioned to stay positive
[Ka] – Kaigh – 1976 – An invariance principle for random walk conditioned by a late return to zero
[KS] – Kesten, Stigum – 1966 – A limit theorem for multidimensional Galton-Watson processes
[Sk] – Skorohod – 1955 – Limit theorems for stochastic processes with independent increments
# Large Deviations 5 – Stochastic Processes and Mogulskii’s Theorem
Motivation
In the previous posts about Large Deviations, most of the emphasis has been on the theory. To summarise briefly, we have a natural idea that for a family of measures supported on the same metric space, increasingly concentrated as some index grows, we might expect the probability of seeing values in a set not containing the limit in distribution to grow exponentially. The canonical example is the sample mean of a family of IID random variables, as treated by Cramer’s theorem.
It becomes apparent that it will not be enough to specify the exponent for a given large deviation event just by taking the infimum of the rate function, so we have to define an LDP topologically, with different behaviour on open and closed sets. Now we want to find some LDPs for more complicated measures, but which will have genuinely non-trivial applications. The key idea in all of this is that the infimum present in the definition of an LDP doesn’t just specify the rate function, it also might well give us some information about the configurations or events that lead to the LDP.
The slogan for the LDP as in Frank den Hollander’s excellent book is: “A large deviation event will happen in the least unlikely of all the unlikely ways.” This will be useful when our underlying space is a bit more complicated.
Setup
As a starting point, consider the set-up for Cramer’s theorem, with IID $X_1,\ldots,X_n$. But instead of investigating LD behaviour for the sample mean, we investigate LD behaviour for the whole set of RVs. There is a bijection between sequences and the partial sums process, so we investigate the partial sums process, rescaled appropriately. For the moment this is a sequence not a function or path (continuous or otherwise), but in the limit it will be, and furthermore it won’t make too much difference whether we interpolate linearly or step-wise.
Concretely, we consider the rescaled random walk:
$Z_n(t):=\tfrac{1}{n}\sum_{i=1}^{[nt]}X_i,\quad t\in[0,1],$
with laws $\mu_n$ supported on $L_\infty([0,1])$. Note that the expected behaviour is a straight line from (0,0) to (1,$\mathbb{E}X_1$). In fact we can say more than that. By Donsker’s theorem we have a functional version of a central limit theorem, which says that deviations from this expected behaviour are given by suitably scaled Brownian motion:
$\sqrt{n}\left(\frac{Z_n(t)-t\mathbb{E}X}{\sqrt{\text{Var}(X_1)}}\right)\quad\stackrel{d}{\rightarrow}\quad B(t),\quad t\in[0,1].$
This is what we expect 'standard' behaviour to look like:

[Figure: a simulated path of $Z_n$, fluctuating around the straight line from $(0,0)$ to $(1,\mathbb{E}X_1)$.]

The deviations from a straight line are on a scale of $\sqrt{n}$. Here are two examples of potential large deviation behaviour:

[Figure: a path which climbs steeply over the first half before flattening out.]

Or this:

[Figure: a path which is flat apart from a single macroscopic jump.]
Note that these are qualitatively different. In the first case, the first half of the random variables are in general much larger than the second half, which appear to have empirical mean roughly 0. In the second case, a large deviation in overall mean is driven by a single very large value. It is obviously of interest to find out what the probabilities of each of these possibilities are.
We can do this via an LDP for $(\mu_n)$. Now it is really useful to be working in a topological context with open and closed sets. It will turn out that the rate function is supported on absolutely continuous functions, whereas obviously for finite n, none of the sample paths are continuous!
We assume that $\Lambda(\lambda)$ is the logarithmic moment generating function of X_1 as before, with $\Lambda^*(x)$ the Fenchel-Legendre transform. Then the key result is:
Theorem (Mogulskii): The measures $(\mu_n)$ satisfy an LDP on $L_\infty([0,1])$ with good rate function:
$I(\phi)=\begin{cases}\int_0^1 \Lambda^*(\phi'(t))dt,&\quad \text{if }\phi\in\mathcal{AC}, \phi(0)=0,\\ \infty&\quad\text{otherwise,}\end{cases}$
where AC is the space of absolutely continuous functions on [0,1]. Note that AC is dense in $L_\infty([0,1])$, so any open set contains a $\phi$ for which $I(\phi)$ is at least in principle finite. (Obviously, if $\Lambda^*$ is not finite everywhere, then extra restrictions on $\phi'$ are required.)
The following picture may be helpful in providing some motivation:

[Figure: a path with a small time-interval magnified, showing the gradient on that interval as the local empirical mean of the increments.]
So what is going on is that if we take a path and zoom in on some small interval around a point, note first that behaviour on this interval is independent of behaviour everywhere else. Then the gradient at the point is the local empirical mean of the random variables around this point in time. The probability that this differs from the actual mean is given by Cramer’s rate function applied to the empirical mean, so we obtain the rate function for the whole path by integrating.
More concretely, but still very informally, suppose there is some $\phi'(t)\neq \mathbb{E}X$, then this says that:
$Z_n(t+\delta t)-Z_n(t)=\phi'(t)\delta t+o(\delta t),$
$\Rightarrow\quad \mu_n\Big(\phi'(t)\delta t+o(\delta t)=\frac{1}{n}\sum_{i=nt+1}^{n(t+\delta t)}X_i\Big),$
$= \mu_n\Big( \phi'(t)+o(1)=\frac{1}{n\delta t}\sum_{i=1}^{n\delta t}X_i\Big)\sim e^{-n\delta t\Lambda^*(\phi'(t))},$
by Cramer. Now we can use independence:
$\mu_n(Z_n\approx \phi)=\prod_{\delta t}e^{-n\delta t \Lambda^*(\phi'(t))}=e^{-\sum_{\delta t}n\delta t \Lambda^*(\phi'(t))}\approx e^{-n\int_0^1 \Lambda^*(\phi'(t))dt},$
as in fact is given by Mogulskii.
Remarks
1) The absolutely continuous requirement is useful. We really wouldn’t want to be examining carefully the tail of the underlying distribution to see whether it is possible on an exponential scale that o(n) consecutive RVs would have sum O(n).
2) In general $\Lambda^*(x)$ will be convex, which has applications as well as playing a useful role in the proof. Recalling den Hollander's mantra, we are interested to see where the infima for LD sets in the host space are attained. So for the event that the empirical mean is greater than some threshold larger than the expectation, Cramer's theorem told us that this is exponentially the same as the event that the empirical mean is roughly equal to the threshold. Now Mogulskii's theorem says more. By convexity, we know that the integral functional for the rate function is minimised by straight lines. So we learn that the contributions to the large deviation are spread roughly equally through the sample. Note that this is NOT saying that all the random variables will have the same higher than expected value. The LDP takes no account of fluctuations in the path on a scale smaller than n. It does however rule out both of the situations pictured a long way up the page. We should expect to see roughly a straight line, with unexpectedly steep gradient. (A quick simulation at the end of this post illustrates this.)
3) The proof as given in Dembo and Zeitouni is quite involved. There are a few stages, the first and simplest of which is to show that it doesn’t matter on an exponential scale whether we interpolate linearly or step-wise. Later in the proof we will switch back and forth at will. The next step is to show the LDP for the finite-dimensional problem given by evaluating the path at finitely many points in [0,1]. A careful argument via the Dawson-Gartner theorem allows lifting of the finite-dimensional projections back to the space of general functions with the topology of pointwise convergence. It remains to prove that the rate function is indeed the supremum of the rate functions achieved on projections. Convexity of $\Lambda^*(x)$ is very useful here for the upper bound, and this is where it comes through that the rate function is infinite when the comparison path is not absolutely continuous. To lift to the finer topology of $L_\infty([0,1])$ requires only a check of exponential tightness in the finer space, which follows from Arzela-Ascoli after some work.
In conclusion, it is fairly tricky to prove even this most straightforward case, so unsurprisingly it is hard to extend to the natural case where the distributions of the underlying RVs (X) change continuously in time, as we will want for the analysis of more combinatorial objects. Next time I will consider why it is hard but potentially interesting to consider with adaptations of these techniques an LDP for the size of the largest component in a sparse random graph near criticality.
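As promised in remark 2), here is a brute-force simulation illustrating the straight-line prediction: conditioning (by rejection) on the event $\{S_n\geq an\}$, the average path has roughly constant gradient a. Parameters are illustrative, and Gaussian increments keep the conditioning event frequent enough to sample:

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials, a = 50, 200_000, 0.4        # standard Gaussian steps; force mean a

steps = rng.normal(0.0, 1.0, size=(trials, n))
S = np.cumsum(steps, axis=1)
cond = S[:, -1] >= a * n               # the large deviation event {S_n >= an}

mean_path = S[cond].mean(axis=0)
print(cond.mean())                     # small: roughly e^{-n a^2 / 2}
# Convexity of Lambda* predicts a straight line of gradient a, not a surge:
print(mean_path[[9, 24, 49]] / np.array([10.0, 25.0, 50.0]))  # all about a
```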
# Poisson Tails
I’ve had plenty of ideas for potential probability posts recently, but have been a bit too busy to write any of them up. I guess that’s a good thing in some sense. Anyway, this is a quick remark based on an argument I was thinking about yesterday. It combines Large Deviation theory, which I have spent a lot of time learning about this year, and the Poisson process, which I have spent a bit of time teaching.
Question
Does the Poisson distribution have an exponential tail? I ended up asking this question for two completely independent reasons yesterday. Firstly, I've been reading up about some more complex models of random networks. Specifically, the Erdos-Renyi random graph is an interesting mathematical structure in its own right, but the independent edge condition results in certain regularity properties which are not seen in many real-world networks. In particular, the degree sequence of real-world networks typically follows an approximate power law. That is, the tail is heavy. This corresponds to our intuition that most networks contain 'hubs' which are connected to a large region of the network. Think about key servers or websites like Wikipedia and Google which are linked to by millions of other pages, or the social butterfly who will introduce friends from completely different circles. In any case, this property is not observed in an Erdos-Renyi graph, where the degrees are binomial and, in the sparse situation, converge in the limit to a Poisson distribution. So, to finalise this observation, we want to be able to prove formally that the Poisson distribution has an exponential (so faster than power-law) tail.
The second occurrence of this question concerns large deviations for the exploration process of a random graph. This is a topic I've mentioned elsewhere (here for the exploration process, here for LDs) so I won't recap extensively now. Anyway, the results we are interested in give estimates for the rate of decay in probability for the event that the path defined by the exploration process differs substantially from the expected path as n grows. A major annoyance in this analysis is the possibility of jumps. A jump occurs if a set of o(n) adjacent underlying random variables (here, the increments in the exploration process) have O(n) sum. A starting point might be to consider whether O(1) adjacent RVs can have O(n) sum, or indeed whether a single Poisson random variable can take a value of order n. In practice, this asks whether the probability $\mathbb{P}(X>\alpha n)$ decays faster than exponentially in n. If it does, then this is dominated on a large deviations scale. If it decays exactly exponentially in n, then we have to consider such jumps in the analysis.
Approach
We can give a precise statement of the probabilities that a Po($\lambda$) random variable X returns a given integer value:
$\mathbb{P}(X=k)=e^{-\lambda}\frac{\lambda^k}{k!}.$
Note that these are the terms in the Taylor expansion of $e^{\lambda}$ appropriately normalised. So, while it looks like it should be possible to evaluate
$\mathbb{P}(X>\alpha n)=e^{-\lambda}\sum_{\alpha n}^\infty \frac{\lambda^k}{k!},$
this seems impossible to do directly, and it isn’t even especially obvious what a sensible bounding strategy might be.
The problem of estimating the form of the limit in probability of increasingly unlikely deviations from expected behaviour surely reminds us of Cramer's theorem. But this and other LD theory is generally formulated in terms of n random variables displaying some collective deviation, rather than a single random variable, with the size of the deviation growing. But we can transform our problem into that form by appealing to the three equivalent definitions of the Poisson process.
Recall that the Poisson process is the canonical description of, say, an arrivals process, where events in disjoint intervals are independent, and the expected number of arrivals in a fixed interval is proportional to the width of the interval, giving a well-defined notion of 'rate' as we would want. The two main ways to define the process are: 1) the times between arrivals are given by i.i.d. Exponential RVs with parameter $\lambda$ equal to the rate; and 2) the number of arrivals in interval [s,t] is independent of all other times, and has distribution given by Po($\lambda(t-s)$). The fact that this definition gives a well-defined process is not necessarily obvious, but let's not discuss that further here.
So the key equivalence to be exploited is that the event $X>n$ for $X\sim \text{Po}(\lambda)$ is a statement that there are at least n arrivals by time 1. If we move to the exponential inter-arrival times definition, we can write this as:
$\mathbb{P}(Z_1+\ldots+Z_n<1),$
where the Z’s are the i.i.d. exponential random variables. But this is exactly what we are able to specify through Cramer’s theorem. Recall that the moment generating function of an exponential distribution is not finite everywhere, but that doesn’t matter as we construct our rate function by taking the supremum over some index t of:
$I(x)=\sup_t (xt-\log \mathbb{E}e^{tZ_1})=\sup_t(xt-\log(\frac{\lambda}{\lambda-t})).$
A simple calculation then gives
$I(x)=\lambda x-1 - \log \lambda x.$
$\Rightarrow I(x)\uparrow \infty\text{ as }x\downarrow 0.$
Note that I(1) is the same for both Exp($\lambda$) and Po($\lambda$), because of the PP equality of events:
$\{Z_1+\ldots+Z_n\leq n\}=\{\text{Po}(\lambda n)=\text{Po}(\lambda)_1+\ldots+\text{Po}(\lambda)_n\geq n\},$
similar to the previous argument. In particular, for all $\epsilon>0$,
$\mathbb{P}(\text{Po}(\lambda)>n)=\mathbb{P}(\frac{Z_1+\ldots+Z_n}{n}<\frac{1}{n})<\mathbb{P}(\frac{Z_1+\ldots+Z_n}{n}<\epsilon),\text{ for large }n.$
$\Rightarrow\quad \mathbb{P}(\text{Po}(\lambda)>n)=O(e^{-nI(\epsilon)}),\text{ for all }\epsilon.$
Since we can take $I(\epsilon)$ as large as we want, we conclude that the probability decays faster than exponentially in n.
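This is easy to check numerically from the exact Poisson probabilities (λ=1 and the values of n are arbitrary):

```python
import math

lam = 1.0
for n in (5, 10, 20, 40):
    # exact tail P(Po(lam) >= n), truncating the series once terms are negligible
    tail = sum(math.exp(-lam) * lam ** k / math.factorial(k)
               for k in range(n, n + 200))
    print(n, math.log(tail) / n)   # tends to -infinity: super-exponential decay
```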
# Large Deviations 4 – Sanov’s Theorem
Although we could have defined things for a more general topological space, most of our thoughts about Cramer's theorem, and the Gartner-Ellis theorem which generalises it, are based on means of real-valued random variables. For Cramer's theorem, we genuinely are interested only in means of i.i.d. random variables. In Gartner-Ellis, one might say that we are able to relax the condition on independence and perhaps identical distribution too, in a controlled way. But this is somewhat underselling the theorem: using G-E, we can deal with a much broader category of measures than just means of collections of variables. The key is that convergence of the log moment generating function is exactly enough to give an LDP with some rate, and we have a general method for finding the rate function.
So, Gartner-Ellis provides a fairly substantial generalisation to Cramer's theorem, but is still similar in flavour. But what about if we look for additional properties of a collection of i.i.d. random variables $(X_n)$. After all, the mean is not the only interesting property. One thing we could look at is the actual values taken by the $X_n$s. If the underlying distribution is continuous, this is not going to give much more information than what we started with. With probability one, $\{X_1,\ldots,X_n\}$ is a set of size n, with distribution given by the product of the underlying measure. However, if the random variables take values in a discrete set, or better still a finite set, then $(X_1,\ldots,X_n)$ gives a so-called empirical distribution.
As n grows towards infinity, we expect this empirical distribution to approximate the real underlying distribution fairly well. This isn’t necessarily quite as easy as it sounds. By the strong law of large numbers applied to indicator functions $1(X_i\leq t)$, the empirical cdf at t converges almost surely to the true cdf at t. To guarantee that this convergence is uniform in t is tricky in general (for reference, see the Glivenko-Cantelli theorem), but is clear for random variables defined on finite sets, and it seems reasonable that an extension to discrete sets should be possible.
So such empirical distributions might well admit an LDP. Note that in the case of Bernoulli random variables, the empirical distribution is in fact exactly equivalent to the empirical mean, so Cramer’s theorem applies. But, in fact we have a general LDP for empirical distributions. I claim that the main point of interest here is the nature of the rate function – I will discuss why the existence of an LDP is not too surprising at the end.
The rate function is going to be interesting whatever form it ends up taking. After all, it is effectively going to be some sort of metric on measures, as it records how far a possible empirical measure is from the true distribution. Apart from total variation distance, we don't currently have many standard examples of metrics on a space of measures. Anyway, the rate function is the main content of Sanov's theorem. This has various forms, depending on how fiddly you are prepared for the proof to be.
Define $L_n:=\frac{1}{n}\sum_{i=1}^n \delta_{X_i}\in\mathcal{M}_1(E)$ to be the empirical measure generated by $X_1,\ldots,X_n$. Then $L_n$ satisfies an LDP on $\mathcal{M}_1(E)$ with rate n and rate function given by $H(\cdot|\mu)$, where $\mu$ is the underlying distribution.
The function H is the relative entropy, defined by:
$H(\nu|\mu):=\int_E \log\frac{\nu(x)}{\mu(x)}\,d\nu(x),$
whenever $\nu<<\mu$, and $\infty$ otherwise. We can see why this absolute continuity condition is required from the statement of the LDP. If the underlying distribution $\mu$ has measure zero on some set A, then the observed values will not be in A with probability 1, and so the empirical measure will be zero on A also.
Note that an alternative form is:
$H(\nu|\mu)=\int_E \frac{\nu(x)}{\mu(x)}\log\frac{\nu(x)}{\mu(x)}\,d\mu(x)=\mathbb{E}_\mu\left[\frac{\nu(x)}{\mu(x)}\log\frac{\nu(x)}{\mu(x)}\right].$
Perhaps it is more clear why this expectation is something we would want to minimise.
In particular, if we want to know the most likely asymptotic empirical distribution inducing a large deviation empirical mean (as in Cramer), then we find the distribution with suitable mean, and smallest entropy relative to the true underlying distribution.
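As a concrete sketch of that last point: for a fair die conditioned to have empirical mean at least 4.5, the entropy minimiser is an exponential tilt of the uniform measure, and the tilt parameter can be found by bisection. (This is the standard Gibbs-conditioning computation; the numbers are illustrative.)

```python
import numpy as np

# Fair die, conditioned on empirical mean >= 4.5. The entropy minimiser
# subject to a mean constraint is an exponential tilt nu(k) ~ e^{t k},
# with t chosen so that the tilted mean hits the target.
vals = np.arange(1, 7)
target = 4.5

def tilted_mean(t):
    w = np.exp(t * vals)
    return (vals * w).sum() / w.sum()

lo, hi = 0.0, 5.0                     # bisect for t: the tilted mean is increasing
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if tilted_mean(mid) < target else (lo, mid)

w = np.exp(lo * vals)
nu = w / w.sum()
rel_entropy = (nu * np.log(6 * nu)).sum()   # H(nu | uniform)
print(nu.round(4), rel_entropy.round(4))    # the least unlikely empirical law
```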
A remark on the proof. If the underlying set of values is finite, then a proof of this result is essentially combinatorial. The empirical distribution is some multinomial distribution, and we can obtain exact forms for everything and then proceed with asymptotic approximations.
I said earlier that I would comment on why the LDP is not too surprising even in general, once we know Gartner-Ellis. Instead of letting $X_i$ take values in whatever space we were considering previously, say the reals, consider instead the point mass $\delta_{X_i}$, which is effectively exactly the same random variable, only now defined on the space of probability measures. The empirical measure is then exactly:
$\frac{1}{n}\sum_{i=1}^n \delta_{X_i}.$
If the support K of the $(X_i)$s is finite, then in fact this space of measures is a convex subspace of $\mathbb{R}^K$, and so the multi-dimensional version of Cramer’s theorem applies. In general, we can work in the possibly infinite-dimensional space $[0,1]^K$, and our relevant subset is compact, as a closed subset of a compact space (by Tychonoff). So the LDP in this case follows from our previous work.
# Large Deviations 3 – Gartner-Ellis Theorem: Where do all the terms come from?
We want to drop the i.i.d. assumption from Cramer’s theorem, to get a criterion for a general LDP as defined in the previous post to hold.
Preliminaries
For general random variables $(Z_n)$ on $\mathbb{R}^d$ with laws $(\mu_n)$, we will continue to have an upper bound like in Cramer’s theorem, provided the moment generating functions of $Z_n$ converge as required. For analogy with Cramer, take $Z_n=\frac{S_n}{n}$. The Gartner-Ellis theorem gives conditions for the existence of a suitable lower bound and, in particular, when this is the same as the upper bound.
We define the logarithmic moment generating function
$\Lambda_n(\lambda):=\log\mathbb{E}e^{\langle \lambda,Z_n\rangle},$
and assume that the limit
$\Lambda(\lambda)=\lim_{n\rightarrow\infty}\frac{1}{n}\Lambda_n(n\lambda)\in[-\infty,\infty],$
exists for all $\lambda\in\mathbb{R}^d$. We also assume that $0\in\text{int}(\mathcal{D}_\Lambda)$, where $\mathcal{D}_\Lambda:=\{\lambda\in\mathbb{R}^d:\Lambda(\lambda)<\infty\}$. We also define the Fenchel-Legendre transform as before:
$\Lambda^*(x)=\sup_{\lambda\in\mathbb{R}^d}\left[\langle x,\lambda\rangle - \Lambda(\lambda)\right],\quad x\in\mathbb{R}^d.$
We say $y\in\mathbb{R}^d$ is an exposed point of $\Lambda^*$ if for some $\lambda$,
$\langle \lambda,y\rangle - \Lambda^*(y)>\langle\lambda,x\rangle - \Lambda^*(x),\quad \forall x\neq y.$
Such a $\lambda$ is then called an exposing hyperplane. One way of thinking about this definition is that $\Lambda^*(x)$ is convex, but is strictly convex in any direction at an exposed point. Alternatively, at an exposed point y, there is a vector $\lambda$ such that $\Lambda^*\circ \pi_\lambda$ has a global minimum or maximum at y, where $\pi_\lambda$ is the projection onto $\langle \lambda\rangle$. Roughly speaking, this vector is what we will use to take the Cramer transform for the lower bound at x. Recall that the Cramer transform is an exponential reweighting of the probability density, which makes a previously unlikely event into a normal one. We may now state the theorem.
Gartner-Ellis Theorem
With the assumptions above:
1. $\limsup_{n\rightarrow\infty}\frac{1}{n}\log \mu_n(F)\leq -\inf_{x\in F}\Lambda^*(x)$, $\forall F\subset\mathbb{R}^d$ closed.
2. $\liminf_{n\rightarrow\infty}\frac{1}{n}\log \mu_n(G)\geq -\inf_{x\in G\cap E}\Lambda^*(x)$, $\forall G\subset\mathbb{R}^d$ open, where E is the set of exposed points of $\Lambda^*$ whose exposing hyperplane is in $\text{int}(\mathcal{D}_\Lambda)$.
3. If $\Lambda$ is also lower semi-continuous, and is differentiable on $\text{int}(\mathcal{D}_\Lambda)$ (which is non-empty by the previous assumption), and is steep, that is, for any $\lambda\in\partial\mathcal{D}_\Lambda$, $\lim_{\nu\rightarrow\lambda}|\nabla \Lambda(\nu)|=\infty$, then we may replace $G\cap E$ by G in the second statement. Then $(\mu_n)$ satisfies the LDP on $\mathbb{R}^d$ with rate n and rate function $\Lambda^*$.
Where do all the terms come from?
As ever, because everything is on an exponential scale, the infimum in the statements affirms the intuitive notion that in the limit, “an unlikely event will happen in the most likely of the possible (unlikely) ways”. The reason why the first statement does not hold for open sets in general is that the infimum may not be attained for open sets. For the proof, we need an exposing hyperplane at x so we can find an exponential tilt (or Cramer transform) that makes x the standard outcome. Crucially, in order to apply probabilistic ideas to the resulting distribution, everything must be normalisable. So we need an exposing hyperplane so as to isolate the point x on an exponential scale in the transform. And the exposing hyperplane must be in $\mathcal{D}_\Lambda$ if we are to have a chance of getting any useful information out of the transform. By convexity, this is equivalent to the exposing hyperplane being in $\text{int}(\mathcal{D}_\Lambda)$.
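For a concrete feel for the objects involved, here is a brute-force computation of the Fenchel-Legendre transform in a one-dimensional example where $\mathcal{D}_\Lambda$ is a proper subset of $\mathbb{R}$: Exp(1) variables, for which $\Lambda(\lambda)=-\log(1-\lambda)$ on $\lambda<1$, and $\Lambda^*(x)=x-1-\log x$ for $x>0$:

```python
import numpy as np

# Fenchel-Legendre transform of Lambda(l) = -log(1 - l) (Exp(1) increments),
# computed by brute force over a grid restricted to D_Lambda = (-inf, 1).
l = np.linspace(-20.0, 0.999, 400_000)
Lambda = -np.log(1.0 - l)

for x in (0.25, 0.5, 1.0, 2.0):
    num = np.max(x * l - Lambda)
    exact = x - 1.0 - np.log(x)     # the closed form, valid for x > 0
    print(x, round(num, 4), round(exact, 4))
```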
# Large Deviations 2 – LDPs, Rate Functions and Lower Semi-Continuity
Remarks from Cramer’s Theorem
So in the previous post we discussed Cramer’s theorem on large deviations for means of i.i.d. random variables. It’s worth stepping back and thinking more abstractly about what we showed. Each $S_n$ has some law, which we think of as a measure on $\mathbb{R}$, though this could equally well be some other space, depending on where the random variables are supported. The law of large numbers asserts that as $n\rightarrow\infty$, these measures are increasingly concentrated at a single point in $\mathbb{R}$, which in this case is $\mathbb{E}X_1$. Cramer’s theorem then asserts that the measure of certain sets not containing this point of concentration decays exponentially in n, and quantifies the exponent, a so-called rate function, via a Legendre transform of the log moment generating function of the underlying distribution.
One key point is that we considered only certain sets $[a,\infty),\,a>\mathbb{E}X_1$, though we could equally well have considered $(-\infty,a],\,a<\mathbb{E}X_1$. What would happen if we wanted to consider an interval, say $[a,b]$, with $\mathbb{E}X_1<a<b$? Well, $\mu_n([a,b])=\mu_n([a,\infty))-\mu_n((b,\infty))$, and we might as well assume that $\mu_n$ is sufficiently continuous, at least in the limit, that we can replace the open interval bound with a closed one. Then Cramer's theorem asserts, written in a more informal style, that $\mu_n([a,\infty))\sim e^{-nI(a)}$ and similarly for $[b,\infty)$. So provided $I(a)<I(b)$, we have
$\mu_n([a,b])\sim e^{-nI(a)}-e^{-nI(b)}\sim e^{-nI(a)}.$
In order to accord with our intuition, we would like I(x) to be increasing for $x>\mathbb{E}X_1$, and decreasing for $x<\mathbb{E}X_1$. Also, we want $I(\mathbb{E}X_1)=0$, to account for the fact that $\mu_n([\mathbb{E}X_1,\infty))=O(1)$. For example, consider a sequence of coin tosses: the probability that the observed proportion of heads is in $[\frac12,1]$ should be roughly 1/2 for all n.
Note that in the previous displayed equation for $\mu_n([a,b])$ the right hand side has no dependence on b. Informally, this means that any event which is at least as unlikely as the event of a deviation to a, will in the limit happen in the most likely of the unlikely ways, which will in this case be a deviation to a, because of relative domination of exponential functions. So if, rather than just half-lines and intervals, we wanted to consider more general sets, we might conjecture a result of the form:
$\mu_n(\Gamma)\sim e^{-n\inf_{z\in\Gamma}I(z)},$
with the approximation defined formally as in the statement of Cramer’s theorem. What can go wrong?
Large Deviations Principles
Well, if the set $\Gamma=\{\gamma\}$ is a single point, and the underlying distribution is continuous, then we would expect $\mu_n(\{\gamma\})=0$ for all n. Similarly, we would expect $\mu_n((\mathbb{E}X_1,\infty))\sim O(1)$, but there is no a priori reason why I(z) should be continuous at $\mathbb{E}X_1$ (in fact, this is false), so taking $\Gamma=(\mathbb{E}X_1,\infty)$ again gives a contradiction.
So we need something a bit more precise. The problem here is that measure (in this case, measure of likeliness on an exponential scale) can leak into open sets through the boundary in the limit; also, for continuous RVs the rate function requires some sort of neighbourhood to make sense, so boundaries of closed sets may give an overestimate. This is reminiscent of weak convergence, and motivated by this, the appropriate general definition for a Large Deviation Principle is:
A sequence of measures $(\mu_n)$ on some space E satisfies an LDP with rate function I and speed n if $\forall \Gamma\in \mathcal{B}(E)$:
$-\inf_{x\in\Gamma^\circ}I(x)\leq \liminf \frac{1}{n}\log\mu_n(\Gamma)\leq \limsup\frac{1}{n}\log\mu_n(\Gamma)\leq -\inf_{x\in \bar{\Gamma}}I(x).$
Although this might look very technical, you might as well think of it as nothing more than the previous conjecture for general sets, with the two problems that we mentioned now taken care of.
So, we need to define a rate function. $I: E\rightarrow[0,\infty]$ is a rate function if it is not identically infinite. We also demand that it is lower semi-continuous, and has closed level sets $\Psi_I^\alpha:=\{x\in E: I(x)\leq\alpha\}$. (These two conditions are in fact equivalent.) I will say what lower semi-continuity is in a moment. Some authors also demand that the level sets be compact. Others call this a good rate function, or similar. The advantage of this is that infima on closed sets are then attained.
It is possible to specify a different speed: the speed measures how fast the convergence happens, and $\frac 1 n$ can be replaced with any function converging to 0, including in a continuous parameter.
Lower Semi-Continuity
A function f is lower semi-continuous if
$f(x)\leq \liminf f(x_n),\text{ for all sequences }x_n\rightarrow x.$
One way of thinking about this definition is to say that the function cannot jump upwards as it reaches a boundary: it can only jump downwards (or not jump at all). The Wikipedia article on semi-continuity has a picture explaining how a lower semi-continuous function must behave at discontinuities: at a jump, the value taken by the function must be no greater than either one-sided limit. It is reasonably clear why this definition is equivalent to having closed level sets.
So the question to ask is: why should rate functions be lower semi-continuous? Rather than proceeding directly, we argue by uniqueness. Given a function on $\mathbb{R}$ with discontinuities, we can turn it into a cadlag function, or a caglad function by fiddling with the values taken at points of discontinuity. We can do a similar thing to turn any function into a lower semi-continuous function. Given f, we define
$f_*(x):=\liminf_{x_n\rightarrow x}f(x_n)=\sup\{\inf_G f: x\in G, G \text{ open}\}.$
The notes I borrowed this idea from described this as the maximal lower semi-continuous regularisation, which I think is quite a good explanation despite the long words.
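Numerically, this regularisation is just a shrinking-window minimum; a toy sketch on a grid:

```python
import numpy as np

# Maximal lower semi-continuous regularisation on a grid:
# f_*(x) = liminf_{y -> x} f(y), approximated by a minimum over a small window.
x = np.linspace(-1.0, 1.0, 2001)
f = np.where(x < 0, 0.0, 1.0)     # a step function ...
f[1000] = 2.0                     # ... with an upward spike at x = 0: not l.s.c.

f_star = np.array([f[max(i - 1, 0): i + 2].min() for i in range(len(f))])
print(f[1000], f_star[1000])      # 2.0 -> 0.0: the spike is pulled down
```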
Anyway, the claim is that if $I(x)$ satisfies a LDP then so does $I_*(x)$. This needs to be checked, but it explains why we demand that the rate function be lower semi-continuous. We really want the rate function not to be unique, and this is a good way to prevent an obvious cause of non-uniqueness. It needs to be checked that it is actually unique once we have this assumption, but that is relatively straightforward.
So, to check that the lower semi-continuous regularisation of I satisfies the LDP if I does, we observe that the upper bound is trivial, since $I_*\leq I$ everywhere. Then, for every open set G, note that for $x\in G$, $I_*(x)=\liminf_{x_n\rightarrow x}I(x_n)$, so we might as well consider sequences within G, and so $I_*(x)\geq \inf_G I$. So, since $I_*(x)\leq I(x)$, it follows that
$\inf_G I_*=\inf_G I,$
and thus we get the lower bound for the LDP.
References
The motivation for this particular post was my own, but the set of notes here, as cited in the previous post were very useful. Also the Wikipedia page on semi-continuity, and Frank den Hollander’s book ‘Large Deviations’.
# Large Deviations 1 – Motivation and Cramer’s Theorem
I’ve been doing a lot of thinking about Large Deviations recently, in particular how to apply the theory to random graphs and related models. I’ve just writing an article about some of the more interesting aspects, so thought it was probably worth turning it into a few posts.
Motivation
Given $X_1,X_2,\ldots$ i.i.d. real-valued random variables with finite expectation, and $S_n:=X_1+\ldots+X_n$, the Weak Law of Large Numbers asserts that the empirical mean $\frac{S_n}{n}$ converges in probability to $\mathbb{E}X_1$. So $\mathbb{P}(S_n\geq n(\mathbb{E}X_1+\epsilon))\rightarrow 0$. In fact, if $\mathbb{E}X_1^2<\infty$, we have the Central Limit Theorem, and a consequence is that $\mathbb{P}(S_n\geq n\mathbb{E}X_1+n^\alpha)\rightarrow 0$ whenever $\alpha>\frac12$.
In a concrete example, if we toss a coin some suitably large number of times, the probability that the proportion of heads will be substantially greater or smaller than $\frac12$ tends to zero. So the probability that at least $\frac34$ of the results are heads tends to zero. But how fast? Consider first four tosses, then twelve. A quick addition of the relevant terms in the binomial distribution gives:
$\mathbb{P}\left(\text{At least }\tfrac34\text{ out of four tosses are heads}\right)=\frac{1}{16}+\frac{4}{16}=\frac{5}{16},$
$\mathbb{P}\left(\text{At least }\tfrac34\text{ out of twelve tosses are heads}\right)=\frac{1}{2^{12}}+\frac{12}{2^{12}}+\frac{66}{2^{12}}+\frac{220}{2^{12}}=\frac{299}{2^{12}}.$
There are two observations to be made. The first is that the second probability is substantially smaller than the first – the decay appears to be relatively fast. The second observation is that $\frac{220}{2^{12}}$ is substantially larger than the rest of the sum. So by far the most likely way for at least $\tfrac34$ out of twelve tosses to be heads is if exactly $\tfrac34$ are heads. Cramer's theorem applies to a general i.i.d. sequence of RVs, provided the tail is not too heavy. It shows that the probability of any such large deviation event decays exponentially with n, and identifies the exponent.
Theorem (Cramer): Let $(X_i)$ be i.i.d. real-valued random variables which satisfy $\mathbb{E}e^{tX_1}<\infty$ for every $t\in\mathbb{R}$. Then for any $a>\mathbb{E}X_1$,
$\lim_{n\rightarrow \infty}\frac{1}{n}\log\mathbb{P}(S_n\geq an)=-I(a),$
$\text{where}\quad I(z):=\sup_{t\in\mathbb{R}}\left[zt-\log\mathbb{E}e^{tX_1}\right].$
Remarks
• So, informally, $\mathbb{P}(S_n\geq an)\sim e^{-nI(a)}$.
• I(z) is called the Fenchel-Legendre transform (or convex conjugate) of $\log\mathbb{E}e^{tX_1}$.
• Considering t=0 confirms that $I(z)\in[0,\infty]$.
• In their extremely useful book, Dembo and Zeitouni present this theorem in greater generality, allowing $X_i$ to be supported on $\mathbb{R}^d$, considering a more general set of large deviation events, and relaxing the requirement for finite mean, and thus also the finite moment generating function condition. All of this will still be a special case of the Gartner-Ellis theorem, which will be examined in a subsequent post, so we make do with this form of Cramer’s result for now.
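Before looking at the proof, here is a quick numerical sanity check of the theorem against the coin-tossing example above, approximating the supremum in I by a grid maximisation (the parameters are illustrative):

```python
import numpy as np
from math import comb, log

# Rate function for fair coin tosses, X_i ~ Bernoulli(1/2):
# I(z) = sup_t [zt - log E e^{tX}], approximated on a grid of t.
t = np.linspace(-15.0, 15.0, 30_001)
log_mgf = np.log(0.5 + 0.5 * np.exp(t))

def I(z):
    return np.max(z * t - log_mgf)

a = 0.75
print(I(a))                      # about 0.1308 = log 2 + a log a + (1-a) log(1-a)
for n in (12, 60, 300):
    k0 = int(np.ceil(a * n))     # exact binomial tail P(S_n >= an)
    tail = sum(comb(n, k) for k in range(k0, n + 1)) / 2 ** n
    print(n, -log(tail) / n)     # decreases towards I(0.75)
```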
The proof of Cramer’s theorem splits into an upper bound and a lower bound. The former is relatively straightforward, applying Markov’s inequality to $e^{tS_n}$, then optimising over the choice of t. This idea is referred to by various sources as the exponential Chebyshev inequality or a Chernoff bound. The lower bound is more challenging. We reweight the distribution function F(x) of $X_1$ by a factor $e^{tx}$, then choose t so that the large deviation event is in fact now within the treatment of the CLT, from which suitable bounds are obtained.
To avoid overcomplicating this initial presentation, some details have been omitted. It is not clear, for example, whether I(x) should be finite whenever x is in the support of $X_1$. (It certainly must be infinite outside – consider the probability that 150% or -40% of coin tosses come up heads!) In order to call this a Large Deviation Principle, we also want some extra regularity on I(x), not least to ensure it is unique. This will be discussed in the next posts.
# Analytic vs Probabilistic Arguments for a Supercritical BP
This follows on directly from the previous post. I was originally going to talk only about what follows, but I got rather carried away with the branching process account. I was stuck on a particular exercise, and we ended up coming up with two arguments: one analytic and one probabilistic. Since the typical flavour of this blog is to present problems which show the advantage of the probabilistic approach, it seems only fair to remark on this case, where the analytic method was less interesting, but much simpler.
Recall that we have a supercritical random graph $G(n,\frac{\lambda}{n}), \lambda>1$, and we are considering the rescaled exploration process $S_{nt}$, which has asymptotic mean $\mu_t=1-t-e^{-\lambda t}$. We can similarly calculate an expression for the asymptotic variance
$\frac{\text{Var}(S_{nt})}{n}\rightarrow v_t=e^{-\lambda t}(1-e^{-\lambda t}).$
To use this to verify the result about the size of the giant component, we verify that $\mu_{\zeta_\lambda+x/\sqrt{n}}$ is negative, and has small variance, which would confirm that the giant component has size bounded above by $\zeta_\lambda$ almost surely. A similar argument is required for the lower bound. The variance is a separate matter, but it is therefore necessary that $\mu_t$ should be decreasing at $t=\zeta_\lambda$, that is $\mu'_{\zeta_\lambda}=\lambda e^{-\lambda \zeta_\lambda}-1<0$. This is what we try to prove in the remainder of this post. Recall that in the previous post we checked that $\mu_t$ itself is equal to zero here.
Heuristic Explanation
$\mu_t$ has been rescaled from the original definition of the exploration process in both size and time-scale so some care is needed to see why this should hold in the limit. Remember that all components apart from the giant component are of size O(log n). So immediately after exhausting the giant component, you are likely to be visiting components of size roughly log n. A time interval of dt for $\mu$ corresponds to ndt for S, during which S will visit some components of size log n and some of O(1) and some in between. In particular, some fixed proportion of vertices are isolated, that is, in a component of size 1.
There is then a complicated size-biasing train of thought. A component of size log n is more likely to come up than an isolated vertex, but there are not as many of them. The log n components push the derivative $\mu_t'$ towards zero, because S_t decreases by 1 over a time-interval of length log n, which gives a gradient of zero in the limit. However, the isolated vertices give a gradient of -1, because S_t decreases by 1 over a time interval of 1. Despite the fact that log n intervals are likely to appear earlier, it still remains the case that after exhausting a component (in particular, at time $t=\zeta_\lambda$, after exhausting the giant component), with some bounded below positive probability you will choose an isolated vertex next. The component size only affects that time-scale if it is O(n), which none of the remaining components are, so the derivative $\mu_{\zeta_\lambda}'$ consists of some complicated weighted mean of 0 and -1. In particular, it is negative.
Analytic solution
Obviously, that won’t do in practice. Suppressing lambdas for ease of notation, the key fact is: $e^{-\lambda \zeta}=1-\zeta$. We want to show that $\lambda e^{-\lambda \zeta}<1$. Substituting
$\lambda=-\frac{\log(1-\zeta)}{\zeta},$
means that it is required to show:
$-\frac{1-\zeta}{\zeta}\log(1-\zeta)<1.$
Differentiating the left hand side gives:
$\frac{\log(1-\zeta)+\zeta}{\zeta^2}<0,$
since of course $\log(1-\zeta)=-\left(\zeta+\frac{\zeta^2}{2}+\frac{\zeta^3}{3}+\dots\right)$. So it suffices to check the result for small $\zeta$. But, again using a Taylor series:
$-\frac{1-\zeta}{\zeta}\log(1-\zeta)=1-\frac12\zeta+O(\zeta^2)<1,$
for small $\zeta$. This gives the required result.
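The fixed-point equation is also easy to check numerically, which makes a reassuring sanity check on the claim $\lambda e^{-\lambda\zeta}<1$; a minimal sketch (the values of λ are arbitrary):

```python
import numpy as np

def zeta(lam, iters=200):
    """Survival probability: the positive root of 1 - z = exp(-lam z)."""
    z = 0.5
    for _ in range(iters):        # fixed-point iteration z -> 1 - exp(-lam z)
        z = 1.0 - np.exp(-lam * z)
    return z

for lam in (1.1, 1.5, 2.0, 5.0):
    z = zeta(lam)
    print(lam, round(z, 4), round(lam * np.exp(-lam * z), 4))  # final column < 1
```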
Probabilistic Interpretation and Solution
First, we observe that $\lambda e^{-\lambda\zeta}=\lambda(1-\zeta)$ is the expected number of vertices in the first generation of a $\text{Po}(\lambda)$ branching process whose progeny become extinct. This motivates considering the canonical decomposition of a supercritical branching process Z into the skeleton process and the dual process. The skeleton $Z^+$ consists of all vertices which have infinitely many successors. It is relatively easy to show that this is a branching process with offspring distribution $\text{Po}(\lambda\zeta)$ conditioned on being positive. The dual process $Z^*$ is a G-W branching process with offspring distribution $\text{Po}(\lambda)$ conditioned on dying out. This is the same as a branching process with offspring distribution $\text{Po}(\lambda(1-\zeta))$, by a thinning argument, which says that if we begin with a Poisson number of things, then remove each one independently with some fixed probability, the remaining number of things is Poisson also.
We can construct the original branching process by
• With probability $\zeta$, take the skeleton, and affix independent copies of $Z^*$ at every vertex of the skeleton.
• With probability $1-\zeta$, just take a copy of $Z^*$.
It is immediately clear that $\lambda(1-\zeta)\leq 1$. After all, the dual process is almost surely finite, so the offspring distribution cannot have expectation greater than 1. Checking that the inequality is strict is more fiddly. The best way I have come up with is to examine the tail of the distribution of the total population size of the original branching process.
The total population size T of a branching process has an exponential tail if the offspring distribution is subcritical. It isn't hugely surprising that this behaves like a large deviation for iid RVs, since in the limit such an event requires a lot of the offspring counts to deviate substantially from the mean. The same holds in the supercritical case, with the additional complication that though the finite part of the tail decays exponentially, there is positive probability that the total size will be infinite. In the critical case, however, there is power-law decay. This is not so surprising, as it marks the threshold for the appearance of the infinite population, just as in a multiplicative coalescent at time 1, we have a load of very large components just about to form a giant component. The tool for all of these results is Dwass's Theorem, which says:
$\mathbb{P}(T=n)=\frac{1}{n}\mathbb{P}(X_1+\ldots+X_n=n-1),$
where the $X_i$ are iid with the offspring distribution. When $\mathbb{E}X_1\neq 1$, this is a large deviation event, for which Cramér’s theorem applies (assuming, as is the case for the Poisson distribution, that the offspring distribution has finite variance). When $\mathbb{E}X_1=1$, the Central Limit Theorem says that with high probability,
$X_1+\ldots+X_n\in [n-n^{3/4},n+n^{3/4}],$
so, skating over the details of whether everything is exactly uniform within this CLT scaling window,
$\mathbb{P}(T=n)\geq \frac{1}{n}\cdot\frac{1}{2n^{3/4}}.$
The true exponent of the power law decay is substantially slower than this, but the above argument works as a back-of-the-envelope bound.
In particular, if the dual process had mean exactly 1, then the total population of the original branching process, conditional on being finite, would have a power-law tail. But as noted above, the finite-population tail of a supercritical branching process decays exponentially. The power law cannot be dominated by an exponential, giving a contradiction, and so indeed $\lambda(1-\zeta)<1$.
# Branching Processes and Dwass’s Theorem
This is something I had to think about when writing my Part III essay, and it turns out to be relevant to some of the literature I’ve been reading this week. The main result is hugely helpful for reducing a potentially complicated combinatorial object to a finite sum of i.i.d. random variables, which in general we do know quite a lot about. I was very pleased with the proof I came up with while writing the essay, even if in the end it turned out to have appeared elsewhere before. (Citation at end)
Galton-Watson processes
A Galton-Watson process is a stochastic process describing a simple model for the evolution of a population. At each stage of the evolution, a new generation is created as every member of the current generation produces some number of 'offspring', with counts that are identically and independently distributed (both across generations and within generations). Such processes were introduced by Galton and Watson to examine the evolution of surnames through history.
More precisely, we specify an offspring distribution, a probability distribution supported on $\mathbb{N}_0$. Then define a sequence of random variables $(Z_n,n\in\mathbb{N})$ by:
$Z_{n+1}=Y_1^n+\ldots+Y_{Z_n}^n,$
where $(Y_k^n,k\geq 1,n\geq 0)$ is a family of i.i.d. random variables with the offspring distribution $Y$. We say $Z_n$ is the size of the $n$th generation. From now on, assume $Z_0=1$ and then we call $(Z_n,n\geq 0)$ a Galton-Watson process. We also define the total population size to be
$X:=Z_0+Z_1+Z_2+\ldots,$
noting that this might be infinite. We refer to the situation where $X<\infty$ as extinction, and can show that extinction occurs almost surely when $\mathbb{E}Y\leq 1$, excepting the trivial case $Y=\delta_1$; when $\mathbb{E}Y>1$, there is positive probability of survival. We say the process is critical if $\mathbb{E}Y=1$; this case is less obvious to visualise, but works equally well in the proof, which is usually driven using generating functions.
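(Not from the original post: a few lines of Python make the extinction dichotomy easy to see by simulating total population sizes. For $\text{Po}(\lambda)$ offspring with $\lambda=1.2$, the survival probability is roughly $0.31$, so extinction should show up in roughly $69\%$ of runs.)

```python
import numpy as np

rng = np.random.default_rng(0)

def total_size(lam, cap=10_000):
    """Total population of a Po(lam) Galton-Watson process from one ancestor;
    returns np.inf once the running total exceeds `cap` (treated as survival)."""
    total, gen = 1, 1
    while gen > 0:
        gen = rng.poisson(lam, size=gen).sum()   # size of the next generation
        total += gen
        if total > cap:
            return np.inf
    return total

for lam in [0.8, 1.0, 1.2]:
    sizes = [total_size(lam) for _ in range(2000)]
    print(lam, np.mean([s < np.inf for s in sizes]))   # ~1.0, ~0.99, ~0.69
```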
Total Population Size and Dwass’s Theorem
Of particular interest is $X$, the total population size, and its distribution. The following theorem gives us a precise and useful link between the probability of the population having size $n$ and the distribution of the sum of $n$ RVs with the relevant offspring distribution. Among the consequences is that we can conclude immediately, by the CLT and Cramér’s Large Deviations Theorem, that the total population size distribution has power-law decay in the critical case, and exponential decay otherwise.
Theorem (Dwass (1)): For a general branching process with a single time-0 ancestor and offspring distribution $Y$ and total population size $X$:
$\mathbb{P}(X=k)=\frac{1}{k}\mathbb{P}(Y^1+\ldots+ Y^k=k-1),\quad k\geq 1$
where $Y^1,\ldots,Y^k$ are independent copies of $Y$.
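Before the proof, a quick Monte Carlo sanity check of the identity (my sketch, not part of the original essay, using a subcritical $\text{Po}(0.8)$ offspring distribution, for which both sides at $k=4$ come out near $0.056$):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, k, trials = 0.8, 4, 100_000

def total_size(cap=10_000):
    total, gen = 1, 1
    while gen > 0 and total <= cap:
        gen = rng.poisson(lam, size=gen).sum()
        total += gen
    return total

# Left side of Dwass: P(X = k) for the total population
lhs = np.mean([total_size() == k for _ in range(trials)])

# Right side: (1/k) * P(Y^1 + ... + Y^k = k - 1) with Y^i iid Po(lam)
rhs = np.mean(rng.poisson(lam, size=(trials, k)).sum(axis=1) == k - 1) / k

print(lhs, rhs)   # both approximately 0.056
```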
We now give a proof via a combinatorial argument. The approach is similar to that given in (2). Much of the literature gives a proof using generating functions.
Proof: For motivation, consider the following. It is natural to consider a branching process as a tree, with the time-0 ancestor as the root. Suppose the event $\{X=k\}$ holds, which means that the tree has $k$ vertices. Now consider the numbers of offspring of each vertex in the tree. Since every vertex except the root has exactly one parent, and there are no vertices outside the tree, we must have $Y^1+\ldots+Y^k=k-1$ where $Y^1,\ldots,Y^k$ are the offspring numbers in some order. However, observe that this is not sufficient. For example, if $Y^1$ is the number of offspring of the root, and $k\geq 2$, then we must have $Y^1\geq 1$.
# Effective Bandwidth
Here, devices have fixed capacity, but packet sizes are random. So, we still have a capacity constraint for the links, but we accept that it won’t be possible to ensure that we stay within those limits all the time, and seek instead to minimise the probability that the limits are exceeded, while keeping throughput as high as possible.
An important result is Chernoff’s Bound: $\mathbb{P}(Y\geq 0)\leq \inf_{s\geq 0}\mathbb{E}e^{sY}$. The proof is very straightforward: apply Markov’s inequality to the non-negative random variable $e^{sY}$. So in particular $\frac{1}{n}\log\mathbb{P}(X_1+\ldots+X_n\geq 0)\leq \inf_{s\geq 0} M(s)$, where $M(s)=\log\mathbb{E}e^{sX}$, and Cramér’s Theorem asserts that after taking a limit in n on the LHS, equality holds, provided $\mathbb{E}X<0,\ \mathbb{P}(X>0)>0$.
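A numerical illustration (mine, not from the original notes): for $X\sim N(\mu,\sigma^2)$ with $\mu<0$ we have $M(s)=s\mu+\tfrac12 s^2\sigma^2$, so $\inf_s M(s)=-\mu^2/2\sigma^2$, and the exact Gaussian tail shows the Chernoff exponent is attained in the limit.

```python
import numpy as np
from scipy.stats import norm

mu, sigma = -0.5, 1.0
cramer = -mu**2 / (2 * sigma**2)   # inf_s M(s) = -1/8 here

for n in [10, 100, 1000, 10_000]:
    # X_1 + ... + X_n ~ N(n*mu, n*sigma^2), so the tail probability is exact
    log_p = norm.logsf(0, loc=n * mu, scale=np.sqrt(n) * sigma)
    print(n, log_p / n)            # tends to cramer = -0.125
```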
We assume that the traffic has the form $S=\sum_{j=1}^J\sum_{i=1}^{n_j}X_{ji}$, where the summands are iid within each class: $X_{ji}$ is the load of the $i$th of the $n_j$ sources of class $j$. We have
$\log\mathbb{P}(S>C)\leq\log \mathbb{E}[e^{s(S-C)}]=\sum_{j=1}^Jn_jM_j(s)-sC,\quad s\geq 0,$
so $\inf_{s\geq 0}\left(\sum_j n_jM_j(s)-sC\right)\leq -\gamma\quad\Rightarrow\quad \mathbb{P}(S\geq C)\leq e^{-\gamma},$
so we want this to hold for large $\gamma$.
We might then choose to restrict attention to
$A=\left\{n:\sum_j n_jM_j(s)-sC\leq-\gamma\ \text{ for some }s\geq 0\right\}.$
So suppose we operate near capacity, say with call profile $n^*$ on (ie near) the boundary of A, and let $s^*$ be the argmin of the above. Then the tangent hyperplane at $n^*$ is $\sum_j n_jM_j(s^*)-s^*C=-\gamma$, and since A’s complement is convex, it suffices to stay on the ‘correct’ side (ie half-space) of this tangent plane.
Dividing by $s^*$, we can rewrite this as $\sum_j n_j\frac{M_j(s^*)}{s^*}\leq C-\frac{\gamma}{s^*}$. Note that this is reasonable since $s^*$ is fixed, and we call $\frac{M_j(s)}{s}=:\alpha_j(s)$ the effective bandwidth. It is with respect to this average that we are bounding probabilities, hence ‘effective’.
Observe that $\alpha_j(s)$ is increasing, by Jensen: $(\mathbb{E}e^X)^t\leq \mathbb{E}e^{tX}$ for $t>1$, and applying this with exponent $t/s$ shows that for $t>s$, $(\mathbb{E}e^{sX})^t\leq(\mathbb{E}e^{tX})^s$, which rearranges to $\frac{M_j(s)}{s}\leq\frac{M_j(t)}{t}$.
In particular,
$\mathbb{E}X\leq \alpha_j(s)\leq \text{ess sup}X.$
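To make the bounds concrete (my addition): for a Bernoulli($p$) source, $M(s)=\log(1-p+pe^s)$, and $\alpha(s)=M(s)/s$ climbs from $\mathbb{E}X=p$ as $s\downarrow 0$ towards $\text{ess sup}\,X=1$ as $s\to\infty$.

```python
import numpy as np

p = 0.3   # Bernoulli(p) load: EX = 0.3, ess sup X = 1

def alpha(s):
    M = np.log(1 - p + p * np.exp(s))   # log moment generating function
    return M / s

for s in [0.01, 0.1, 1.0, 10.0, 50.0]:
    print(s, alpha(s))   # increases from about 0.3 towards 1
```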
https://sites.google.com/site/dlhquantum/bell-s-theorem-and-quantum-realism | ### "Bell's Theorem and Quantum Realism" Correction
The following corrections are to be made to the book "Bell's Theorem and Quantum Realism: Reassessment in Light of the Schrödinger Paradox" (see the Springer website here: http://www.springer.com/physics/quantum+physics/book/978-3-642-23467-5?changeHeader).

In chapter four, sections 4.3.1 (p. 67), 4.3.2 (p. 68, footnote 13), and 4.5.2 (p. 90), the notation "script-M(O)" is offered for the experimental measurement procedure of a quantum observable O. A more appropriate notation, "script-E(O)", was utilized in chapter 2. (It also makes a brief appearance at the end of chapter 4, in the book summary section 4.7.) The notion behind "script-E(O)" is to emphasize experimental procedure (of which there might be several distinct possibilities, even when one is "measuring" the same quantum observable; see chapter 2 of the book). The appropriate "LaTeX" command for a "script" variable is "{\cal }".

Ideally, one would also prefer not to utilize the term "measurement locality" (4.3.2, p. 68, footnote 13; and 4.5.2, p. 90, in the text and in footnote 57). Instead, a better appellation would be "procedural locality," which emphasizes the experimental procedure brought to bear in "measuring" some observable (perhaps making use of the initials PL rather than ML).

For those who would like something more explicit and exact, below are attempts at corrected versions of the pages requiring changes. Unfortunately, I was not able to completely match the fonts, size and style used by the typesetter. Therefore, these pages are imperfect in that they do not match up perfectly with the pages in the actual book insofar as the flow of the text. In particular, the end-points of the pages are not the same as those in the book. Nevertheless, here are .jpg files of the pages in question, if you would care to download or just take a look:

Page 67

The point of quantum contextuality is that measurement is an *ambiguous* concept. Moreover, this is not some special result that follows from analysis of hidden variables, but from the quantum formalism itself (see chapter 2 of the book). The notation "script-M(O)" and the term "measurement locality" do not really reflect this insight very well. I hope that any resulting confusion will be minimal. Thank you.

DLH, January 2012
https://samjshah.com/2011/02/04/multiple-integrals-jigga-wha/ | # Multiple Integrals! Jigga Wha?!
In Multivariable Calculus today, I let my kids loose. We are starting our chapter on multiple integrals, and I generally start out just dryly explaining what integration in higher dimensions might look like. But today, I decided to scrap that and have my kids try to see if they could generalize things themselves and come up with an idea of what integration in multivariable calculus would look like.
It was awesome. They immediately picked up on the fact that it would give you (signed) volume. That was great. They realized the xy-plane was equivalent to the x-axis. With some prompting, they understood we weren’t integrating over a 1D line (like between x=2 and x=5 on the x-axis), but now on a 2D region. (Of course, a little later, I explained that they could integrate over a line, but they’d get an area.)
Here’s the final list we generated.
It was nice, because students were coming up with some pretty complicated ideas on their own. They were motivating things we were going to be learning. Nice.
After we went through this thought exercise, still not looking at a single equation, I then threw the following up on the board:
I wanted to see if they could use our discussion to suss out some information about the notation, and the meaning behind it. They actually got that the limits 2/4 correspond with y and the 0/3 correspond with the x. And that the region we’re integrating over is a rectangle. And the surface we’re using is $4-2xy$. I mean, they got it.
I then showed them how to evaluate this double integral, briefly. I tried to get the why this works across to them, but we ran out of time and I slightly confused myself and got my explanation garbled. I promised that by the next class, I would fix things so they would totally get it.
Although not perfect (but good enough for me, for now), I whipped up this worksheet which I think attempts to make clear what is going on mathematically.
I strongly believe, however, that this will drive home the concept way better than I ever have done before. If you teach double integrals, this might come in handy.
PS. I, a la Silvanus P. Thompson in Calculus Made Easy, talk about dx and dy as “a little bit of x” and “a little bit of y.” So if you’re wondering what I’m looking for in question 2 on p.2, I want students to say dy. Then the answer to A is $\left(\int_{0}^{1} x^2 e^y\,dx\right)dy$. That’s the volume of one infinitely thin slice. Now for B, we have to add an infinity of these slices up, all the way from y=0 to y=2. Well, we know an integral sign is simply a fancy sign for summation, so we just have $\int_{0}^{2} \left(\int_{0}^{1} x^2 e^y\,dx\right)dy$
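(A quick numerical check of that sum-of-slices picture, my addition rather than the post’s: approximate the integral by literally adding up little bits $f(x,y)\,dx\,dy$ on a grid, and compare with the exact value $(e^2-1)/3$.)

```python
import math

# Approximate the double integral of x^2 * e^y over [0,1] x [0,2]
# by summing f(x, y) * dx * dy over a fine grid of midpoints.
n = 500
dx, dy = 1.0 / n, 2.0 / n
total = 0.0
for i in range(n):
    x = (i + 0.5) * dx
    for j in range(n):
        y = (j + 0.5) * dy
        total += x * x * math.exp(y) * dx * dy

exact = (math.exp(2) - 1) / 3   # since the inner integral of x^2 is 1/3
print(total, exact)             # both approximately 2.1297
```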
1. Just so you know, I am totes jealous of your worksheet-making capabilities. When you say, “I whipped up this worksheet,” I see something that would have taken me 5 15 hours to make. And such great formatting!
1. Oh, thanks! Making worksheets is easy for me (this one probably took me 30/35 minutes from start to finish?). It’s coming up with the idea behind the worksheet to get my kids from POINT A to POINT B that is tough.
(In this case, POINT A is knowing that a double integral somehow relates to volume… POINT B is actually understanding how the double integration works abstractly. The vehicle? Using a concrete example, but being gentle about it.)
2. Damn it. Failed strike-through joke with some bad HTML skills.
3. Elizabeth says:
Once again, this is really fantastic. Who’d have thought that a worksheet could generate such transcontinental excitement? :-)
https://chemistry.stackexchange.com/questions/9012/phase-stability-of-alcohols | # Phase stability of alcohols
Tert-butyl alcohol seems unusual among alcohols in that its melting point is high (25°C) while its boiling point is still low (82°C). I am looking for more materials with phase-unstable liquid regions like this, so I'm curious: what makes the liquid phase so unstable relative to the solid and gas phases?
Do the methyl groups align and act like alkanes to stabilize the solid but the molecule is still small enough to have a low boiling point?
To my larger point, any advice on characteristics of materials that exhibit low liquid phase stability?
Chromium, molybdenum, and tungsten metals versus their respective hexacarbonyls. $\ce{-SiMe3}$ and $\ce{-CF3}$ plus symmetry confer remarkable volatility. $\ce{I(CF3)7}$ melts and boils around 0 °C. That is molecular weight 609.95, and it has no static molecular structure (Bartell mechanism).
https://tex.stackexchange.com/questions/191848/centering-list-of-x-and-following-with-text/191850 | # Centering List of X and Following with Text
With the tocloft package, I can add text under the title of a "List of Z" at the beginning of a latex document:
\renewcommand{\cftafterZtitle}{\par\noindent \textnormal{Z} \hfill \textnormal{PAGE}}
Using the following I can center the title "List of Z":
\renewcommand{\cftZtitlefont}{\hfill\bfseries}
\renewcommand{\cftafterZtitle}{\hfill}
But when I try to combine the two commands, it moves the text to the right margin instead of the center:
\renewcommand{\cftloftitlefont}{\hfill\bfseries}
\renewcommand{\cftafterloftitle}{\hfill\par\noindent \textnormal{Z} \hfill \textnormal{PAGE}}
Does anyone know how to both center the title "List of Z" and have text below the title?
• Please help us to help you and add a minimal working example (MWE) that illustrates your problem. It will be much easier for us to reproduce your situation and find out what the issue is when we see compilable code, starting with \documentclass{...} and ending with \end{document}. – cfr Jul 17 '14 at 1:59
• Don't know the package but usually centring is done with things like \centering or \begin{center} rather than \hfill which does not usually make a good solution. – cfr Jul 17 '14 at 2:00
You can add an empty \hbox after the second \hfill (See egreg's answer to What is \null and when do we need to use it?):
\documentclass{article}
\usepackage{tocloft}
\renewcommand{\cftloftitlefont}{\hfill\bfseries}
\renewcommand{\cftafterloftitle}{\hfill\null\par\noindent\textnormal{Z}\hfill \textnormal{PAGE}}
\begin{document}
\listoffigures
\noindent X\hrulefill Y% for comparison only
\end{document}
Another option is to use \hfil instead:
\renewcommand{\cftloftitlefont}{\hfil\bfseries}
\renewcommand{\cftafterloftitle}{\hfil\par\textnormal{Z}\hfill \textnormal{PAGE}}
https://zbmath.org/?q=an%3A0337.13011 | ## Projective modules over polynomial rings. (English) Zbl 0337.13011
### MathOverflow Questions:
Are finite projective modules over $$R[t]$$ free when $$R$$ is DVR?
### MSC:
13C10 Projective and free modules and ideals in commutative rings 13F20 Polynomial rings and ideals; rings of integer-valued polynomials 13D15 Grothendieck groups, $$K$$-theory and commutative rings
### References:
[1] Bass, H.: Some problems in “classical” algebraic K-theory. Algebraic K-theory II, Lecture Notes in Math. 342, pp. 3-73. Berlin-Heidelberg-New York: Springer 1973
[2] Bass, H.: Libération des modules projectifs sur certains anneaux de polynômes. Séminaire Bourbaki, 1973/74, no. 448, Lecture Notes in Math. 431, pp. 228-254. Berlin-Heidelberg-New York: Springer 1975
[3] Horrocks, G.: Projective modules over an extension of a local ring. Proc. London Math. Soc. 14(3), 714-718 (1964) · Zbl 0132.28103
[4] Murthy, M.P.: Projective A[x]-modules. Jour. London Math. Soc. 41, 453-456 (1966) · Zbl 0142.01001
[5] Serre, J.P.: Faisceaux algébriques cohérents. Ann. Math. 61, 197-278 (1955) · Zbl 0067.16201
https://mattermodeling.stackexchange.com/questions/4138/are-different-eigensolvers-consistent-within-vasp-algo-normal-vs-fast | Are different eigensolvers consistent within VASP (Algo=Normal vs Fast)
I tried to relax a 4x4x1 supercell of ferromagnetic monolayer material using the default settings (ALGO = Normal) but it didn't converge. So, I switched to ALGO = Fast and the results are converging normally now. Is this setting safe? Will this affect the accuracy of the results? The used INCAR file is below :
ENCUT = 600 eV
PREC = Accurate
LREAL = Auto
EDIFFG = -0.001
EDIFF = 1E-8
LCHARG = .FALSE.
LWAVE = .FALSE.
ISMEAR = 0
SIGMA = 0.03
NSW = 299
IBRION = 2
ISIF = 3
ISPIN = 2
MAGMOM = 16*2.0 32*0.0
ALGO = Fast # This was Normal before editing
#Mixer
AMIX = 0.2
BMIX = 0.00001
AMIX_MAG = 0.8
BMIX_MAG = 0.00001
LASPH = .TRUE.
NCORE = 2
Changing ALGO should make no difference in an ideal world. However, when you invoke spin polarization, you may find that the two algorithms converge to different magnetic states. The best practice would be to ensure that you converge to the right solution.
That being said, the NORMAL algo is normally more robust than Fast. This in general might be a bad sign for your system. You can also try the ALL algo and see what that gives.
I see you have also added an incar, here is some general advice that might influence convergence.
• ADDGRID is a spooky keyword; I would say never use it, but sometimes it helps. I suggest leaving it off, since convergence issues in these magnetic systems tend not to be fixed by it.
• You are using a small sigma value; I would suggest using 0.2 and reducing it to your desired value after convergence. It will not influence the geometry much but will make convergence much easier.
• Leave the mixing settings at their defaults most of the time. You can try this approach as a first attempt, but if it doesn't fix the problem do not keep it.
• Consider running a spin paired calculation first as a single point calculation, save the WAVECAR/CHGCAR, then add magnetization. This often helps as well.
• EDIFF = 1e-8 is insanely accurate; use something more like 1e-4 or 1e-5 for geometry optimization. If you find that you cannot converge the geometry you can raise it, or switch to a VTST geometry optimizer, which uses forces that are less sensitive to this. (A sketch of an INCAR with this advice applied follows below.)
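Putting the advice above together, a more forgiving starting INCAR might look like the following. This is only a sketch with illustrative values, not a prescription for your material; check each setting against your own system:

ENCUT = 600 eV
PREC = Accurate
LREAL = Auto
EDIFFG = -0.01 # a 0.01 eV/Angstrom force criterion is usually sufficient
EDIFF = 1E-5 # looser electronic criterion during geometry steps
ISMEAR = 0
SIGMA = 0.2 # broaden first; tighten towards 0.03 only after convergence
NSW = 299
IBRION = 2
ISIF = 3
ISPIN = 2
MAGMOM = 16*2.0 32*0.0
ALGO = Normal
# AMIX/BMIX/AMIX_MAG/BMIX_MAG removed: start from the default mixing
LASPH = .TRUE.
NCORE = 2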
• Good point about the magnetic moments. That's a real subtlety. – Andrew Rosen Jan 15 at 18:57
• It has given me many angry noises at my desk in the past few days – Tristan Maxson Jan 15 at 19:02
• I have added my iNCAR file could you please have a look at it? Don't you think that the mixing tags are the source of the problem? – Chi Kou Jan 15 at 19:04
• @ChiKou I have added some additional advice based on prior experience. Maybe some can be applied. – Tristan Maxson Jan 15 at 19:14
https://www.doubtnut.com/question-answer/if-i-int-sin-2x-3-4cosx3dx-then-i-equals-642546099 | # If $I=\int \frac{\sin 2x}{(3+4\cos x)^{3}}\,dx$, then $I$ equals
Text Solution
(a) $\frac{3\cos x+8}{(3+4\cos x)^{2}}+C$ (b) $\frac{3+8\cos x}{16(3+4\cos x)^{2}}+C$ (c) $\frac{3+\cos x}{(3+4\cos x)^{2}}+C$ (d) $\frac{3-8\cos x}{16(3+4\cos x)^{2}}+C$
Answer: (b)
Solution:
$I=\int \frac{\sin 2x}{(3+4\cos x)^{3}}\,dx.$ Put $t=3+4\cos x$, so that $dt=-4\sin x\,dx$ and $\cos x=\frac{t-3}{4}$. Then
$I=\int\frac{2\sin x\cos x}{(3+4\cos x)^{3}}\,dx=-\frac{1}{8}\int\frac{t-3}{t^{3}}\,dt=\frac{1}{8}\left(\frac{1}{t}-\frac{3}{2}\cdot\frac{1}{t^{2}}\right)+C=\frac{2t-3}{16t^{2}}+C=\frac{8\cos x+3}{16(3+4\cos x)^{2}}+C.$
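As an independent check (my addition, not part of the original solution), one can differentiate option (b) with SymPy and confirm that it reproduces the integrand:

```python
import sympy as sp

x = sp.symbols('x')
integrand = sp.sin(2*x) / (3 + 4*sp.cos(x))**3
candidate = (3 + 8*sp.cos(x)) / (16 * (3 + 4*sp.cos(x))**2)  # option (b)

# The derivative of the candidate antiderivative, minus the integrand,
# should simplify to zero.
residue = sp.simplify(sp.expand_trig(sp.diff(candidate, x) - integrand))
print(residue)  # expected output: 0
```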
https://www.nature.com/articles/nmat4456?error=cookies_not_supported&code=af356b90-691a-4e41-9613-32bedce77b79 | # Observation of superconductivity induced by a point contact on 3D Dirac semimetal Cd3As2 crystals
## Abstract
Three-dimensional (3D) Dirac semimetals, which possess 3D linear dispersion in the electronic structure as a bulk analogue of graphene, have lately generated widespread interest in both materials science and condensed matter physics1,2. Recently, crystalline Cd3As2 has been proposed and proved to be a 3D Dirac semimetal that can survive in the atmosphere3,4,5,6,7,8,9. Here, by using point contact spectroscopy measurements, we observe exotic superconductivity around the point contact region on the surface of Cd3As2 crystals. The zero-bias conductance peak (ZBCP) and double conductance peaks (DCPs) symmetric around zero bias suggest p-wave-like unconventional superconductivity. Considering the topological properties of 3D Dirac semimetals, our findings may indicate that Cd3As2 crystals under certain conditions could be topological superconductors10,11,12,13, which are predicted to support Majorana zero modes or gapless Majorana edge/surface modes in the boundary depending on the dimensionality of the material14,15,16,17.
## References
1. Young, S. M. et al. Dirac semimetal in three dimensions. Phys. Rev. Lett. 108, 140405 (2012).
2. Wang, Z. J., Weng, H. M., Wu, Q. S., Dai, X. & Fang, Z. Three-dimensional Dirac semimetal and quantum transport in Cd3As2 . Phys. Rev. B 88, 125427 (2013).
3. Liu, Z. K. et al. A stable three-dimensional topological Dirac semimetal Cd3As2 . Nature Mater. 13, 677–681 (2014).
4. Neupane, M. et al. Observation of a three-dimensional topological Dirac semimetal phase in high-mobility Cd3As2 . Nature Commun. 5, 3786 (2014).
5. Borisenko, S. et al. Experimental realization of a three-dimensional Dirac semimetal. Phys. Rev. Lett. 113, 027603 (2014).
6. Jeon, S. et al. Landau quantization and quasiparticle interference in the three-dimensional Dirac semimetal Cd3As2 . Nature Mater. 13, 851–856 (2014).
7. He, L. P. et al. Quantum transport in the three-dimensional Dirac semimetal Cd3As2 . Phys. Rev. Lett. 113, 246402 (2014).
8. Tian, L. et al. Ultrahigh mobility and giant magnetoresistance in Cd3As2: Protection from backscattering in a Dirac semimetal. Nature Mater. 14, 280–284 (2015).
9. Zhao, Y. F. et al. Anisotropic Fermi surface and quantum limit transport in high mobility 3D Dirac semimetal Cd3As2 . Phys. Rev. X 5, 031037 (2015).
10. Hasan, M. Z. & Kane, C. L. Colloquium: Topological insulators. Rev. Mod. Phys. 82, 3045–3067 (2010).
11. Qi, X.-L. & Zhang, S.-C. Topological insulators and superconductors. Rev. Mod. Phys. 83, 1057–1110 (2011).
12. Alicea, J. New directions in the pursuit of Majorana fermions in solid state systems. Rep. Prog. Phys. 75, 076501 (2012).
13. Ryu, S., Schnyder, A. P., Furusaki, A. & Ludwig, A. W. W. Topological insulators and superconductors: Ten-fold way and dimensional hierarchy. New J. Phys. 12, 065010 (2010).
14. Read, N. & Green, D. Paired states of fermions in two dimensions with breaking of parity and time-reversal symmetries and the fractional quantum Hall effect. Phys. Rev. B 61, 10267–10297 (2000).
15. Kitaev, A. Y. Unpaired Majorana fermions in quantum wires. Phys. Usp. 44, 131–136 (2001).
16. Ivanov, D. A. Non-Abelian statistics of half-quantum vortices in p-wave superconductors. Phys. Rev. Lett. 86, 268–271 (2001).
17. Fu, L. & Kane, C. L. Superconducting proximity effect and Majorana fermions at the surface of a topological insulator. Phys. Rev. Lett. 100, 096407 (2008).
18. Nayak, C., Simon, S. H., Stern, A., Freedman, M. & Das Sarma, S. Non-Abelian anyons and topological quantum computation. Rev. Mod. Phys. 80, 1083–1159 (2008).
19. Alicea, J., Oreg, Y., Refael, G., Oppen, F. v. & Fisher, M. P. A. Non-Abelian statistics and topological quantum information processing in 1D wire networks. Nature Phys. 7, 412–417 (2011).
20. Hor, Y. S. et al. Superconductivity in CuxBi2Se3 and its implications for pairing in the undoped topological insulator. Phys. Rev. Lett. 104, 057001 (2010).
21. Wray, L. A. et al. Observation of topological order in a superconducting doped topological insulator. Nature Phys. 6, 855–859 (2010).
22. Sasaki, S. et al. Topological superconductivity in CuxBi2Se3 . Phys. Rev. Lett. 107, 217001 (2011).
23. Levy, N. et al. Experimental evidence for s-wave pairing symmetry in superconducting CuxBi2Se3 single crystals using a scanning tunneling microscope. Phys. Rev. Lett. 110, 117001 (2013).
24. Mourik, V. et al. Signatures of Majorana fermions in hybrid superconductor–semiconductor nanowire devices. Science 336, 1003–1007 (2012).
25. Deng, M. T. et al. Observation of Majorana fermions in a Nb-InSb nanowire-Nb hybrid quantum device. Nano Lett. 12, 6414–6419 (2012).
26. Das, A. et al. Zero-bias peaks and splitting in an Al-InAs nanowire topological superconductor as a signature of Majorana fermions. Nature Phys. 8, 887–895 (2012).
27. Liu, J., Potter, A. C., Law, K. T. & Lee, P. A. Zero-bias peaks in the tunneling conductance of spin-orbit-coupled superconducting wires with and without Majorana end-states. Phys. Rev. Lett. 109, 267002 (2012).
28. Nadj-Perge, S. et al. Observation of Majorana fermions in ferromagnetic atomic chains on a superconductor. Science 346, 602–607 (2014).
29. Laube, F., Goll, G., Löhneysen, H. v., Fogelström, M. & Lichtenberg, F. Spin-triplet superconductivity in Sr2RuO4 probed by Andreev reflection. Phys. Rev. Lett. 84, 1595–1598 (2000).
30. Kashiwaya, S., Kashiwaya, H., Saitoh, K., Mawatari, Y. & Tanaka, Y. Tunneling spectroscopy of topological superconductors. Physica E 55, 25–29 (2014).
31. Kashiwaya, S. & Tanaka, Y. Tunneling effects on surface bound states in unconventional superconductors. Rep. Prog. Phys. 63, 1641–1724 (2000).
32. Sheet, G., Mukhopadhyay, S. & Raychaudhuri, P. Role of critical current on the point-contact Andreev reflection spectra between a normal metal and a superconductor. Phys. Rev. B 69, 134507 (2004).
33. Aggarwal, L. et al. Unconventional superconductivity at mesoscopic point contacts on the 3D Dirac semimetal Cd3As2 . Nature Mater. http://dx.doi.org/10.1038/nmat4455 (2015).
34. Daghero, D. & Gonnelli, R. S. Probing multiband superconductivity by point-contact spectroscopy. Supercond. Sci. Technol. 23, 043001 (2010).
35. Deutscher, G. Andreev–Saint-James reflections: A probe of cuprate superconductors. Rev. Mod. Phys. 77, 109–135 (2005).
36. Blonder, G. E., Tinkham, M. & Klapwijk, T. M. Transition from metallic to tunneling regimes in superconducting microconstrictions: Excess current, charge imbalance, and supercurrent conversion. Phys. Rev. B 25, 4515–4532 (1982).
## Acknowledgements
We acknowledge C. Zhang, F. Yang, Y. Xing and Y. Liu for help with experiments. This work was financially supported by the National Basic Research Program of China (Grant Nos. 2013CB934600, 2015CB921102, 2012CB921300, 2012CB927400), the National Natural Science Foundation of China (Nos. 11222434, 11174007, 11534001, 11574008), and the Research Fund for the Doctoral Program of Higher Education (RFDP) of China.
## Author information
### Contributions
J.Wang and J.Wei conceived the experiments. He Wang, Huichao Wang and W.Y. carried out transport measurements. Haiwen Liu, X.-J.L. and X.C.X. performed the theoretical interpretation. Hong Lu and S.J. grew the crystals.
### Corresponding authors
Correspondence to Xiong-Jun Liu, Jian Wei or Jian Wang.
## Ethics declarations
### Competing interests
The authors declare no competing financial interests.
## Supplementary information

Supplementary Information (PDF 756 kb)
Wang, H., Wang, H., Liu, H. et al. Observation of superconductivity induced by a point contact on 3D Dirac semimetal Cd3As2 crystals. Nature Mater 15, 38–42 (2016). https://doi.org/10.1038/nmat4456
https://skyandtelescope.org/online-gallery/3-years-saturn/ | Photographer:
Peter Wienerroither
Email:
peter.wienerroither@univie.ac.at
Location of Photo:
near Vienna, Austria
3/13/2007
Equipment:
Canon EOS 5D, Sigma 50mm Macro, mount Astro 5. Exposure 4x 4 min. at ISO 400.
Description:
A photo/graph that shows the path of Saturn through Cancer and Leo from Aug. 2005 until Sep. 2008, in steps at the 1st and 15th of each month. An animated GIF can be seen at http://homepage.univie.ac.at/~pw/pwafop/20070313-004d.gif
https://hal.archives-ouvertes.fr/hal-00911140 | # Improvements on the accelerated integer GCD algorithm
1 Computer Science Institute, University of Oran Es-Senia. Algeria
LIPN - Laboratoire d'Informatique de Paris-Nord
Abstract: The present paper analyses and presents several improvements to the algorithm for finding the $(a,b)$-pairs of integers used in the $k$-ary reduction of the right-shift $k$-ary integer GCD algorithm. While the worst-case complexity of Weber's "Accelerated integer GCD algorithm" is $\mathcal{O}\!\left(\log_\phi(k)^2\right)$, we show that the worst-case number of iterations of the while loop is exactly $\tfrac 12 \left\lfloor \log_{\phi}(k)\right\rfloor$, where $\phi := \tfrac 12 \left(1+\sqrt{5}\right)$. We suggest improvements on the average complexity of the latter algorithm and also present two new faster residual algorithms: the sequential and the parallel one. A lower bound on the probability of avoiding the while loop in our parallel residual algorithm is also given.
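For orientation (my sketch, not taken from the paper): with $k=2$ the $k$-ary reduction specialises to the classical right-shift binary GCD, which in Python looks like this.

```python
def binary_gcd(u: int, v: int) -> int:
    """Right-shift binary GCD, i.e. the k-ary reduction with k = 2."""
    if u == 0:
        return v
    if v == 0:
        return u
    shift = 0
    while (u | v) & 1 == 0:        # factor out common powers of 2
        u, v, shift = u >> 1, v >> 1, shift + 1
    while u & 1 == 0:
        u >>= 1
    while v:
        while v & 1 == 0:          # right-shift away factors of 2 in v
            v >>= 1
        if u > v:
            u, v = v, u
        v -= u                     # gcd(u, v) = gcd(u, v - u)
    return u << shift

assert binary_gcd(48, 18) == 6
```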
Document type: Journal articles
Contributor: Christian Lavault
Submitted on : Monday, February 10, 2014 - 4:19:10 PM
Last modification on : Thursday, February 7, 2019 - 5:53:12 PM
Document(s) archived on: Saturday, May 10, 2014 - 11:15:10 PM
### Files
Improvtsgcd97.pdf
Files produced by the author(s)
### Identifiers
• HAL Id : hal-00911140, version 1
• ARXIV : 1402.2266
### Citation
Sidi Mohamed Sedjelmaci, Christian Lavault. Improvements on the accelerated integer GCD algorithm. Information Processing Letters, Elsevier, 1997, 61 (1), pp.31--36. ⟨hal-00911140⟩
http://stevekifowit.com/archives/Trig_Notes/sec5_2.html | # Section 5.2 Logarithmic Functions
Section Objectives
1. Evaluate logarithmic functions.
2. Graph logarithmic functions.
3. Use properties of logarithms to simplify expressions.
### Logarithmic Functions
The logarithmic functions are the inverses of the exponential functions.
To be more specific...
Let $a$ be a fixed positive real number not equal to 1. The logarithmic function with base-$a$, denoted $\log_a x$, is the inverse of the base-$a$ exponential function. That is, $y=\log_a x$ if and only if $a^y=x$.
#### Examples
• $\log_2 1024 = 10$ because $2^{10} = 1024$.
• $\log_{10} 1000 = 3$ because $10^3=1000$.
• Can you find two consecutive positive integers that bound $\log_3 20$?
• Your calculator should compute base-10 logarithms, often called common logs. Use your calculator to compute $\log_{10} 37=\log 37$.
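(Not part of the original notes: a calculator-style check in Python, using math.log(x, a) for change of base.)

```python
import math

print(math.log2(1024))    # 10.0, since 2**10 == 1024
print(math.log10(1000))   # 3.0
print(math.log(20, 3))    # about 2.727, so log_3(20) lies between 2 and 3
print(math.log10(37))     # about 1.568
```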
### Properties of the Logarithmic Functions
Because the logs and exponentials are inverses, we must have:
• $\log_a (a^x) = x$ for any real number $x$
• $a^{\log_a x} = x$ for any positive real number $x$
#### Examples
• $\log_5 5^8 = 8$
• $10^{\log 15} = 15$
In general, the logarithmic functions have the following properties.
#### $f(x)=\log_a x, a>1$
• Continuous and increasing
• One-to-one (graph passes the horizontal line test)
• Domain: $(0, +\infty)$, i.e., all positive real numbers
• Range: $(-\infty,+\infty)$, i.e., all real numbers
• $x=0$ is a vertical asymptote of the graph.
• $(1,0)$ is the only $x$-intercept of the graph.
• $(a,1)$ is a point on the graph.
• $f(x) \to \infty$ as $x \to \infty$, but it does so slowly.
#### $f(x)=\log_a x,\ 0<a<1$
• Continuous and decreasing
• One-to-one (graph passes the horizontal line test)
• Domain: $(0, +\infty)$, i.e., all positive real numbers
• Range: $(-\infty,+\infty)$, i.e., all real numbers
• $x=0$ is a vertical asymptote of the graph.
• $(1,0)$ is the only $x$-intercept of the graph.
• $(a,1)$ is a point on the graph.
• $f(x) \to -\infty$ as $x \to \infty$, but it does so slowly.
#### Examples
• Discuss the graph of $y=\log_3 x$.
• Discuss the graph of $y=\log_{2/3} x$.
• Discuss the graph of $y=1+\log_2 (x-4)$.
### The Natural Logarithm
The base-$e$ logarithm is called the natural logarithm: $\ln x := \log_e x$.
Your scientific calculator has built-in functions to compute base-10 and base-$e$ exponentials and logarithms.
### Using the Properties of Logs
The properties of logarithms can be very useful when evaluating expressions and solving equations.
#### Examples
• Solve for $x$: $\quad \log(2x+1)=\log 3x$
• Solve for $x$: $\quad \log_4(x^2-6)=\log_4 10$
• Use the properties of logs to evaluate $\log_2 \frac{1}{8}$.
• Use the properties of logs to evaluate $\log_3 \sqrt{9}$.
• Use the properties of logs to evaluate $\displaystyle \ln \frac{1}{e^2}$.
http://www.ck12.org/physics/Centripetal-Acceleration/lesson/user:Yi50eXNvbi5ncm92ZXJAZ21haWwuY29t/Centripetal-Acceleration/r1/ | # Centripetal Acceleration
Students will learn what centripetal acceleration is, where it applies and how to calculate it. Students will also learn when a force is acting as a centripetal force and how to apply it.
### Key Equations
Centripetal Force
$F_C = \frac{mv^2}{r} \begin{cases}m & \text{mass (in kilograms, kg)}\\v & \text{speed (in meters per second, m/s}\text{)}\\r & \text{radius of circle}\end{cases}$
Centripetal Acceleration
$a_C = \frac{v^2}{r} \begin{cases}v & \text{speed (in meters per second, m/s}\text{)}\\r & \text{radius of circle}\end{cases}$
Guidance
If a mass $m$ is traveling with velocity $\vec{v}$ and experiences a centripetal --- always perpendicular --- force $\vec{F_c}$ , it will travel in a circle of radius
$r = \frac{m v^2}{|\vec{F}|} \text{ [1]}$

Alternatively, to keep this mass moving at this velocity in a circle of this radius, one needs to apply a centripetal force of

$\vec{F_c} = \frac{mv^2}{r} \text{ [2]}$

By Newton's Second Law, this is equivalent to a centripetal acceleration of

$\vec{F_c} = m\vec{a_c} \quad\Rightarrow\quad \vec{a_c} = \frac{v^2}{r} \text{ [3]}$
#### Example 1
If you are 4m from the center of a Merry-Go-Round that is rotating at 1 revolution every 2 seconds, what is your centripetal acceleration?
##### Solution
First we need to find your tangential velocity. We can do this using the given angular velocity.
$\omega=\frac{2\pi\text{ rad}}{2\text{ s}}=\pi\text{ rad/s}, \qquad v=\omega r=\pi\text{ rad/s}\times 4\text{ m}=4\pi\text{ m/s}$
$a_c=\frac{v^2}{r}=\frac{(4\pi\text{ m/s})^2}{4\text{ m}}=4\pi^2\text{ m/s}^2\approx 39.5\text{ m/s}^2$
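(A quick check of Example 1 in Python; my addition, not part of the lesson.)

```python
import math

r = 4.0                     # m, distance from the center
omega = 2 * math.pi / 2.0   # rad/s: one revolution every 2 seconds
v = omega * r               # tangential speed, 4*pi m/s
a_c = v**2 / r              # centripetal acceleration

print(v, a_c)               # about 12.57 m/s and 39.48 m/s^2 (= 4 pi^2)
```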
### Time for Practice
1. A 6000 kg roller coaster goes around a loop of radius 30m at 6 m/s. What is the centripetal acceleration?
2. For the Gravitron ride above, assume it has a radius of 18 m and a centripetal acceleration of 32 m/s². Assume a person in the Gravitron has 180 cm height and 80 kg of mass. What is the speed at which it is spinning? Note you may not need all the information here to solve the problem.
1. 1.2 m/s²
2. 24 m/s
https://www.flapw.de/MaX-5.1/documentation/semicoreLOs/ | ## Describing semicore states with local orbitals
If the treatment of a semicore state as a core electron leads to a ghost band, the user typically resolves this issue by switching the treatment of the respective electrons to a valence electron treatment. This procedure involves several steps that have to be performed in a consistent way.
The starting point is the identification of the responsible semicore states. This is done by identifying those core electron states with the highest eigenenergies.
For each atom type the eigenenergies of the core electron states are provided in the coreStates elements of the out.xml file. The number of core electrons lost from the respective MT sphere is also listed in this tag. An example of such an output is provided below. It features $3p_{1/2}$ and $3p_{3/2}$ states with very high-lying eigenenergies.
<coreStates atomType=" 1" atomicNumber=" 23" spin="1"
kinEnergy=" 941.0636778793" eigValSum=" -550.9805159504"
lostElectrons=" 0.111893">
<state n="1" l="0" j="0.5" energy="-195.9388108744" weight="2.000"/>
<state n="2" l="0" j="0.5" energy="-21.4169509005" weight="2.000"/>
<state n="2" l="1" j="0.5" energy="-17.9790788510" weight="2.000"/>
<state n="2" l="1" j="1.5" energy="-17.7189379876" weight="4.000"/>
<state n="3" l="0" j="0.5" energy="-1.8913495403" weight="2.000"/>
<state n="3" l="1" j="0.5" energy="-0.9625902671" weight="2.000"/>
<state n="3" l="1" j="1.5" energy="-0.9318007834" weight="4.000"/>
</coreStates>
After identifying the core electron states to be moved to the valence description the number of electrons in these states has to be counted. This is done by multiplying for each state the electrons in it by the number of atoms in the respective atom group and adding these numbers up for all considered states of all considered atom groups.
For each atom type and each state the number of core electrons per atom is provided in coreStates/state/@weight. In general these are 2 electrons for $s$ states, 6 electrons for $p$ states, 10 electrons for $d$ states, and 14 electrons for $f$ states if spin-orbit splitting is neglected.
To move the description of the semicore electrons from the core electrons to the valence electrons the respective core electron states have to be removed in the input file and the number of valence electrons has to be increased.
In the electron configuration as provided in an electronConfig tag the respective states listed in atomSpecies/species/electronConfig/coreConfig have to be moved directly to the section of the valence electrons in atomSpecies/species/electronConfig/valenceConfig. The number of valence electrons is specified in calculationSetup/bzIntegration/@valenceElectrons. It has to be adapted even if the electron configuration is specified directly.
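For example (hypothetical numbers, not taken from the documentation): if the $3p$ semicore states of the vanadium species above, which hold $2+4=6$ electrons per atom, are moved to the valence description for an atom group containing 4 atoms, then $6\times 4=24$ electrons change category, and the value of calculationSetup/bzIntegration/@valenceElectrons has to be raised by 24.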
The last step is the extension of the LAPW basis by local orbitals (LOs). For this you have to consider the main quantum number of the semicore states and the orbital character.
For the addition of semicore LOs (SCLOs) a new lo tag has to be inserted in the atomSpecies/species section. All of these tags have to be at the end of the section. The tag involves the specification of the LO type in atomSpecies/species/lo/@type. For the description of semicore states this has to be set to SCLO. It specifies details of the LO energy parameter calculation procedure. SCLO extrapolates the spherical effective MT potential by a confining potential outside the MT sphere, considers an atomic problem with this potential, and uses the eigenenergy related to the specified main quantum number and angular momentum quantum number as LO energy parameter. The main quantum number is specified in atomSpecies/species/lo/@n and the angular momentum quantum number in atomSpecies/species/lo/@l. The last parameter that has to be specified in this tag is the degree of the energy derivative of the solution to the atomic problem. This is specified in atomSpecies/species/lo/@eDeriv. For the most common usage of the function $u_{l}^\alpha(r_\alpha,E_{l}^\text{lo})$ this has to be set to 0. If higher-order energy derivatives of the function have to be used, the respective degree of the derivative has to be specified here. An example of the specification of $3p$ semicore LOs is shown below.
<lo type="SCLO" l="1" n="3" eDeriv="0"/> | 2021-09-23 23:02:13 | {"extraction_info": {"found_math": true, "script_math_tex": 8, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36533400416374207, "perplexity": 1357.7582702152404}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057479.26/warc/CC-MAIN-20210923225758-20210924015758-00492.warc.gz"} |
https://artofproblemsolving.com/wiki/index.php?title=1989_USAMO_Problems/Problem_2&oldid=22403 | # 1989 USAMO Problems/Problem 2
## Problem
The 20 members of a local tennis club have scheduled exactly 14 two-person games among themselves, with each member playing in at least one game. Prove that within this schedule there must be a set of 6 games with 12 distinct players.
## Solution 1
Consider a graph with $20$ vertices and $14$ edges, where the vertices are the members and the edges are the games. The sum of the degrees of the vertices is $28$; since every vertex has degree at least $1$, by the Pigeonhole Principle at least $12$ vertices have degree exactly $1$ and at most $8$ vertices have degree greater than $1$. If we keep deleting edges incident to vertices of degree greater than $1$ (at most $8$ such deletions, since the total degree exceeds the minimum of $20$ by only $8$), then we are left with at least $6$ edges, and all of the vertices have degree either $0$ or $1$. These $6$ edges represent the $6$ games with $12$ distinct players.
## Solution 2
Let a slot be a place we can put a member in a game, so there are two slots per game, and 28 slots total. We begin by filling exactly 20 slots, each with a distinct member, since each member must play at least one game. Let there be $m$ games with both slots filled and $n$ games with only one slot filled, so $2m+n=20$. Since there are only 14 games, $m+n \leq 14 \Longrightarrow 2m+n \leq 14+m \Longleftrightarrow 20 \leq 14+m \Longrightarrow m \geq 6$, so there must be at least 6 games with two distinct members each, and we must have our desired set of 6 games. | 2023-01-29 01:35:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 19, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35122886300086975, "perplexity": 184.51590416291745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499697.75/warc/CC-MAIN-20230129012420-20230129042420-00315.warc.gz"}
https://www.maths.usyd.edu.au/s/scnitm/lachlans-SydneyDynamicsGroup-Burns | SMS scnews item created by Lachlan Smith at Mon 10 Feb 2020 1713
Type: Seminar
Distribution: World
Expiry: 13 Mar 2020
Calendar1: 13 Feb 2020 1600-1700
CalLoc1: Carslaw 175
Auth: lachlans@105.66.233.220.static.exetel.com.au (lsmi9789) in SMS-WASM
# Sydney Dynamics Group: Burns -- Flexible spectral methods and high-level programming for PDEs
Dear All,
This week, Thursday February 13, Keaton Burns (MIT) will give a talk at USyd
in Carslaw 175 (note unusual time and place), at 4pm on
Title: Flexible spectral methods and high-level programming for PDEs
Abstract:
The large-scale numerical solution of PDEs is an essential part of scientific
research. Decades of work have been put into developing fast numerical schemes for
specific equations, but computational research in many fields is still largely
software-limited. Here we will discuss how algorithmic flexibility and composability
can enable new science, as illustrated by the Dedalus Project. Dedalus is an
open-source Python framework that automates the solution of general PDEs using spectral
methods. High-level abstractions allow users to symbolically specify equations,
parallelize and scale their solvers to thousands of cores, and perform arbitrary
analysis with the computed solutions. These features are enabling us to perform novel
simulations of astrophysical, geophysical, and biological fluids with modern
mathematical techniques. We will discuss applications using new bases for tensor-valued
equations in spherical domains, immersed boundary methods for multiphase flows, and
multi-domain simulations interfacing Dedalus with other PDE and integral equation
solvers.
Hope to see you all there, Lachlan | 2022-12-07 10:11:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3040563762187958, "perplexity": 11291.822140343957}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711151.22/warc/CC-MAIN-20221207085208-20221207115208-00284.warc.gz"}
https://www.nifty-site-manager.com/docs/fns/cpy.html | fn: cpy
#### Syntax
The syntax for cpy calls is:
f++:
cpy(sources, target)
cpy sources target
n++:
@cpy(sources, target)
@cpy sources target
#### Description
cpy is the copy function; it copies the files and/or directories given by the sources parameters to the target given in the trailing parameter. For more than one source the target should be an existing directory; for a single source the target can be either an existing directory or a file to copy to.
Note: Paths can be unquoted, single quoted or double quoted.
Note: You should also be able to use the copy function of the underlying shell, typically copy on Windows and cp on other platforms like FreeBSD, Linux, OSX, etc.
Note: Nift will skip to the first non-whitespace character (i.e. to the first character that is not a space, tab or newline) after a cpy call and inject it into the output file where the call started. If you want to prevent Nift from doing this, put a '!' after the call, e.g.:
@cpy dir1 dir2;!
@cp(dir1, dir2)!
#### Options
The following options are available for cpy calls:
| option | description |
| --- | --- |
| b | backup files to be replaced |
| f | ensure files have write permission before trying to overwrite them |
| i | prompt when copying files |
| n | do not overwrite existing files |
| T | treat target as a file rather than a directory |
| u | only overwrite files if the file to replace is newer |
| v | output which files are being copied and where (verbose) |
#### f++ example
Examples of cpy being used with f++:
cpy("sample.txt", "sample1.txt")
cpy dir1 dir2 dir3
cpy{u} *.txt dir
#### n++ example
Example of cpy being used with n++:
@cpy("sample.txt", "sample1.txt")
@cpy dir1 dir2 dir3
@cpy{u} *.txt dir | 2020-08-13 03:02:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5051696300506592, "perplexity": 12583.367341970648}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738950.61/warc/CC-MAIN-20200813014639-20200813044639-00500.warc.gz"} |
https://web2.0calc.com/questions/factor-the-expression-using-the-two-different-techniques | +0
# Factor the expression using the two different techniques listed for Parts 1(a) and 1(b).
Factor the expression using the two different techniques listed for Parts 1(a) and 1(b).
SamJones Feb 24, 2018
a) In order to factor using this method, let's try and identify the GCF first. 9 is the greatest common factor between 36 and 81. a^4 is the greatest common factor between the a's, and b^10 is the factor for the b's. Let's factor it out!
$$36a^4b^{10}-81a^{16}b^{20}$$ Factor out the GCF, $$9a^4b^{10}$$, like I described earlier. $$9a^4b^{10}\left(4-9a^{12}b^{10}\right)$$ Don't stop here, though! Notice that the resulting binomial is a difference of squares. $$9a^4b^{10}\left(2+3a^6b^5\right)\left(2-3a^6b^5\right)$$
b) The beginning binomial is a difference of squares to begin with, so it is possible to start with this first!
$$36a^4b^{10}-81a^{16}b^{20}$$ Let's do this approach this time! $$\left(6a^2b^5+9a^8b^{10}\right)\left(6a^2b^5-9a^8b^{10}\right)$$ Don't stop yet! Both binomials have their own GCF's! $$3a^2b^5\left(2+3a^6b^5\right)*3a^2b^5\left(2-3a^6b^5\right)$$ Combine the multiplication. $$9a^4b^{10}\left(2+3a^6b^5\right)\left(2-3a^6b^5\right)$$
Well, these are the two techniques.
TheXSquaredFactor Feb 24, 2018
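A quick check of the key step, added here for verification: $$(2+3a^6b^5)(2-3a^6b^5)=4-9a^{12}b^{10},$$ and multiplying back by the GCF gives $$9a^4b^{10}\left(4-9a^{12}b^{10}\right)=36a^4b^{10}-81a^{16}b^{20},$$ the original expression.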
| 2018-12-18 16:21:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7350713610649109, "perplexity": 790.1293106446703}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376829429.94/warc/CC-MAIN-20181218143757-20181218165757-00177.warc.gz"}
https://networkx.org/documentation/networkx-2.3/reference/generated/networkx.generators.community.ring_of_cliques.html | Warning
This documents an unmaintained version of NetworkX. Please upgrade to a maintained version and see the current NetworkX documentation.
# networkx.generators.community.ring_of_cliques
ring_of_cliques(num_cliques, clique_size)
Defines a “ring of cliques” graph.
A ring of cliques graph consists of cliques connected through single links. Each clique is a complete graph.
Parameters:
num_cliques (int) – Number of cliques
clique_size (int) – Size of cliques
Returns: G – ring of cliques graph
Return type: NetworkX Graph
Raises: NetworkXError – If the number of cliques is lower than 2 or if the size of cliques is smaller than 2.
Examples
>>> G = nx.ring_of_cliques(8, 4)
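A slightly fuller usage sketch (the expected counts below are not from the docs but follow from the definition above: 8 cliques of 4 nodes give 8 * C(4, 2) = 48 clique edges plus 8 connecting links):

>>> import networkx as nx
>>> G = nx.ring_of_cliques(8, 4)
>>> G.number_of_nodes()
32
>>> G.number_of_edges()
56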
Notes
The connected_caveman_graph graph removes a link from each clique to connect it with the next clique. Instead, the ring_of_cliques graph simply adds the link without removing any link from the cliques. | 2023-03-29 06:58:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2731693685054779, "perplexity": 4441.5734333001055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948951.4/warc/CC-MAIN-20230329054547-20230329084547-00410.warc.gz"} |
http://www.physicsforums.com/showthread.php?s=47f2ab4ffedaefe6db800ba825c484db&p=4495602 | # Vector model of atom. Hopefully easy question.
by LagrangeEuler
Tags: atom, model, vector
In a system with one electron, the total angular momentum vector $\vec{j}$ is just: $$\vec{j}=\vec{l}+\vec{s}$$ http://selfstudy.in/MscPhysics/BScVectorModelOfAtom.pdf On page 3 the author draws a triangle. The magnitudes of the vectors are $|\vec{l}|=\sqrt{l(l+1)}\hbar$, $|\vec{s}|=\sqrt{s(s+1)}\hbar$, $|\vec{j}|=\sqrt{j(j+1)}\hbar$. And then, seemingly from nowhere, $j=l+s$ or $j=l-s$. Could you please explain that? Thanks.
Sci Advisor: These are the maximal and minimal possible values, respectively. If you add two vectors, the length of the sum will always lie between these two extremal values.
But if I look at these magnitude formulas, $$|\vec{l}|+|\vec{s}|\neq |\vec{j}| \quad \text{for } j=l+s.$$ Right?
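A quick numerical illustration of this point (added here; take $l=1$, $s=\frac{1}{2}$, so $j=l+s=\frac{3}{2}$): $$|\vec{l}|=\sqrt{2}\,\hbar\approx 1.414\,\hbar,\qquad |\vec{s}|=\tfrac{\sqrt{3}}{2}\,\hbar\approx 0.866\,\hbar,\qquad |\vec{j}|=\tfrac{\sqrt{15}}{2}\,\hbar\approx 1.936\,\hbar,$$ so indeed $|\vec{l}|+|\vec{s}|\approx 2.280\,\hbar\neq|\vec{j}|$: the quantum numbers satisfy $j=l+s$, while the vector magnitudes only satisfy $\big||\vec{l}|-|\vec{s}|\big|\le|\vec{j}|\le|\vec{l}|+|\vec{s}|$.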
| 2014-03-10 04:45:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6093957424163818, "perplexity": 3279.297075524665}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010638293/warc/CC-MAIN-20140305091038-00079-ip-10-183-142-35.ec2.internal.warc.gz"}
https://geo.libretexts.org/Courses/University_of_California_Davis/UCD_GEL_56_-_Introduction_to_Geophysics/Text/1%3A_Rheology_of_Rocks/1.5%3A_Summary | Let's review what we've learned about rheology so far. Rheology describes and defines how a material deforms. To deform a material, stress must be applied, which causes strain. When the stress ($$\sigma$$) placed on a rock is greater than $$\sigma_{s-fric}$$ or $$\sigma_{s-frac}$$, the rock will reach its failure point and deform. There are two types of failure a rock can experience, failure by frictional sliding or failure by fracture.
A rock can also deform if it experiences a high degree of stress. The two primary types of deformation are elastic and viscous. Elastic deformation is shallow and has a low magnitude of strain. If the elastic strain is big enough, failure occurs. Viscous deformation occurs deeper and at much higher pressures and temperatures than elastic deformation. Elastic deformation obeys $$\sigma=Ee$$, which is a constitutive relation, meaning that it defines the rheology; $$E$$ is Young's modulus, which describes the relationship between stress and strain in a material. Viscous deformation obeys $$\sigma=2\mu\dot{\epsilon}$$, where the strain rate $$\dot{\epsilon}$$ is the time derivative of strain ($$\frac{d\epsilon}{dt}$$). Two common types of viscous flow are Couette flow and flow down an inclined plane, as seen in the asthenosphere. Different materials experience viscous deformation at different rates. There is often a large range in viscosity values for the same material, so it is common to only think about viscous flow in terms of order of magnitude. | 2019-07-18 04:55:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6394907832145691, "perplexity": 627.1731759005899}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525500.21/warc/CC-MAIN-20190718042531-20190718064531-00487.warc.gz"}
https://mathoverflow.net/tags/surfaces/hot | # Tag Info
42
Well, all I did was a search on "homeomorphism history", but... I tried to extract some points that are made in conjunction to your question (Riemann, Möbius, Jordan), though feel free to edit it down if it is too long (and apologies to those who think this should be remapped to a History of Math Q/A). The evolution of the concept of homeomorphism, by ...
18
Here are explicit examples when $M$ is compact, connected, and $\chi(M)\le0$. Orientable case: Let $M$ be the 1-point compactification of the hyperelliptic Riemann surface defined in the affine plane $\mathbb{C}^2$ by $$y^2 = x^{2g+1}-1.$$ This is a smooth Riemann surface of genus $g\ge1$ and hence $\chi(M) = 2-2g$. The holomorphic $1$-form $\omega = \ldots$
16
The answer is already given in the comments (by Ryan Budney and Mizar). But I think it makes sense to clear up this confusing point. The classical Gauss-Bonnet formula is [e.g. https://en.wikipedia.org/wiki/Gauss%E2%80%93Bonnet_theorem ] $$\int_M K \, dA+\int_{\partial M} k_g \, ds=2\pi \chi(M).$$ In this formula nothing requires orientation of $M$! $dA$ is the ...
15
To answer Joseph's questions: First, it's not impossible to integrate the geodesic flow of the hyperbolic plane in these coordinates, but the formulae I got aren't very nice, so I'm not going to type them in unless I can find a better way to express them. It's probably easier than I got on a first pass through, but I don't have time to work on simplifying ...
15
1. Yes. 2. Yes. (I suppose that the surfaces are "the same" if they are homeomorphic). For 1, it is sufficient to check the definition of surface: that every point has a neighborhood homeomorphic to the disc. For interior points of the polygon, and for points on the sides, this is evident, and for the corners this is easy. For 2, just recall classification of ...
15
I'll assume we're talking about complex functions; if real, tensor with $\mathbb{C}$. Now pass to the group of units. With the topology given by spectral radius (this is an algebraic description of the C-* topology), the group of connected components of the group of units is $H^1(X, \mathbb{Z})$ which of course knows the genus. If you really like ...
15
First, for simple closed curves, this was known long before Freedman-Hass-Scott. For closed surfaces, it was first proved by Baer in Baer, R., Kurventypen auf Flächen. J. reine angew. Math., 156 (1927), 231–246. and Baer, R., Isotopie von Kurven auf orientierbaren, geschlossenen Flächen und ihr Zusammenhang mit der topologischen Deformation der Flächen. ...
14
There are many examples of surfaces in $\mathbb{R}^3$ with constant negative curvature. They can be described by using the so-called parametrization by Chebyshev nets. Have a look at the paper by Robert McLachlan, A gallery of constant-negative-curvature surfaces, The Mathematical Intelligencer 16 (1994), 31-37. However (and this answers your question) ...
14
In general, surfaces in $\mathbb{E}^3$ for which the principal curvatures satisfy a given functional relation $F(\kappa_1,\kappa_2)=0$ are said to be Weingarten surfaces (of type $F$), and the condition for a graph $z = f(x,y)$ to be a Weingarten surface of type $F$ is a single second order PDE for the function $f(x,y)$. The general theory tells you that, ...
13
Also, not an answer but some comments. When one learns about the geometry of smooth surfaces in $\mathbb{R}^3$, the question of rigidity and flexibility arises quite naturally. And, at first sight, it is plausible that there should be some characterization of these properties in terms of geometric invariants, especially the second fundamental form. However, ...
12
Any smooth compact surface smoothly embedded in $\mathbb{R}^3$ that is not the 2-sphere must have an infinite fundamental group and hence must have infinitely many distinct (in your sense) geodesics joining any two distinct points.
This result follows from Morse theory: If $S$ is the surface and $a$ and $b$ are points on it, then each fixed-endpoint ...
12
This is a particular case of Corollary 1.1 of Edwards, Robert D.; Kirby, Robion C. Deformations of spaces of imbeddings. Ann. of Math. (2) 93 (1971), 63--88. MR0283802, which says that the group of homeomorphisms of any compact manifold is locally contractible.
11
The special feature of $X$, a sphere with three or more punctures, that is being used here is that the space $E(X)$ of all homotopy equivalences $X\to X$ has $\pi_1 E(X)=0$. (Here we take the identity map of $X$ as the basepoint of $E(X)$ for computing $\pi_1 E(X)$.) The corresponding statement when $X$ is an annulus is not true, since $\pi_1 E(X)={\mathbb Z}$...
11
The number of the orbits is infinite. Consider the upper central series, that is a sequence of derived subgroups: $G^1=[G,G]$ and $G^{i+1}=[G^{i},G^{i}]$. All subgroups $G^i$ are normal in the group $G$. Since $G^1$ is free of infinite rank, the sequence $\{G^i\}_{i=1,\ldots,\infty}$ is a sequence of free groups of countable rank that does not stabilize, i....
11
Problem 2 in the list of open problems that Douglas Zare linked to answers the question (namely that there is a standard candidate, and it is even called the standard triple bubble). I quote it here with a few interspersed comments of my own. Problem 2 (Sullivan) We construct the standard clusters of $k$ bubbles in $\mathbb{R}^n$ ($k\leq n+1$) as follows....
11
No, there doesn't exist such a foliation. The existence of any foliation would mean the Euler characteristic is zero, so the surface must be either a torus or a Klein bottle. Foliations for these surfaces are understood well enough to rule out having both dense and non-dense leaves. Any foliation will contain a "Reeb component" (for which no leaf is dense) ...
11
For any topological group $G$, there is a classifying space $BG$ and a principal $G$-bundle $EG \to BG$ called the universal principal $G$-bundle which is determined up to isomorphism by the fact that $EG$ is weakly contractible. On a paracompact topological space $X$, any principal $G$-bundle $P \to X$ admits a map $f : X \to BG$, called a classifying map, ...
11
I think the relevant location is item 23, page 352, but what Hadamard aims at is stated as follows: A smooth, co-orientable surface of $\mathbb{R}^3$ with Gauss curvature bounded below by some $\kappa >0$ is simply connected. (implicitly, the surface is compact without boundary) ("Or une surface à deux côtés et sans points singuliers, à courbure ...
10
No. Consider the case of an ellipsoid with three distinct axes, and remove the four umbilic points. Then you cannot find such vector fields on a (punctured) neighborhood of the deleted umbilics. Have a look at this reference on umbilics and try drawing the vector field on such a punctured neighborhood, and you'll see why.
10
The torus has two functions $f$ and $g$ which are (1) relatively prime, (2) each have two square roots, and (3) whose product has 4 square roots. For instance take two functions which vanish on disjoint loops which are not null-homologous. The sphere does not have two such functions because (1) implies that $V(f)$ and $V(g)$ are disjoint and (2) implies ...
10
Counterexamples are easily constructed using the Thurston norm. In fact, any example of a fibered, oriented, closed 3-manifold $M$, with a fiber of genus $\ge 2$ and with pseudo-Anosov monodromy, and with 2nd homology of rank $\ge 2$, gives counterexamples. The Thurston norm on $H_2(M;\mathbb{R})$ has a polyhedral unit ball, and there is a symmetric set of ...
10
It seems that such a pill exists. Take a ball and drill a hole through it, so you get a solid torus; we assume it has smooth boundary $\Sigma$. By the Gauss–Bonnet formula, we have $$\int\limits_\Sigma G=0,$$ where $G$ denotes Gauss curvature. Denote by $H$ the mean curvature of $\Sigma$; it is mostly very negative in the surface of the hole. It is easy to ...
10
The second statement ought to be in the literature somewhere but I don't know a reference so I'll give an argument. The result can be rephrased in terms of graphs. Let $S$ be a compact connected surface with non-empty boundary and let $P$ be a non-empty finite set of points in the interior of $S$. Consider finite connected graphs $X$ in $S$ with $P$ as ...
10
I would recommend looking at the work of Moira Chas to start. Here are two interesting papers of hers to read: "The Goldman bracket and the intersection of curves on surfaces" and "Combinatorial Lie bialgebras of curves on surfaces". She even has an app on her website that computes the bracket for you: Goldman Bracket. How to use the app: in the first box you ...
9
The group $F$ is isomorphic to the symmetric group $S_5$. In fact, since $N_5$ is non-orientable of genus $5$, both $F$ and the extended group $F^*$ (of order twice the order of $F$) act on its orientable double cover, that has genus $4$. In Conder's database, this is expressed by saying that the action of $F$ in genus $4$ is reflexible and that there is a ...
8
I think Myers only considered analytic metrics, see his papers "Connections between differential geometry and topology I and II", Duke Math. J. 1 (1935), 376-391, and 2 (1936), 95-102. For arbitrary metrics on $S^2$ the cut locus is indeed a tree. This can be deduced from e.g. in [Shiohama and Tanaka, Cut loci and distance spheres on Alexandrov surfaces] ...
8
McMullen and Taubes 4-manifolds with inequivalent symplectic forms and 3-manifolds with inequivalent fibrations constructs 3-manifolds $N$ with different fibrations, whose Euler classes do not lie in the same $Diff(N)$-orbit. The idea of the proof is that two fibrations can not be in the same $Diff(N)$-orbit if the Poincaré duals of their fibers belong to ...
8
I'm rearranging my answer a little bit because I realized that I overlooked an apparent possibility (that turns out not to occur), and I didn't want my answer to be misleading: If the surface in Euclidean $\mathbb{R}^4$ has positive Gauss curvature and is homogeneous, it will be complete and hence compact. Hence the group of ambient symmetries will have to ...
8
But the existence of an umbilic point on the sphere follows from topological considerations: The sum of the Hopf indices of the umbilics is 1 (by a theorem of Hopf) so there has to be at least one umbilic. Put another way: If there were no umbilics, then union of the principal directions at each point would define a 4-fold covering space of the sphere, ...
Only top voted, non community-wiki answers of a minimum length are eligible | 2020-10-31 22:12:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8458760380744934, "perplexity": 243.9688868797117}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107922463.87/warc/CC-MAIN-20201031211812-20201101001812-00660.warc.gz"} |
https://algorithm.zone/blogs/enumerating-prime-factors-parallel-search-set-application-questions.html | # 952. Largest Component Size by Common Factor: enumerating prime factors + union-find
## Problem description
This is LeetCode 952, Largest Component Size by Common Factor; the difficulty is Hard.
Tag: "Mathematics", "joint search set"
Given a non-empty array nums of distinct positive integers, consider the following graph:
• There are nums.length nodes, labeled from nums[0] to nums[nums.length - 1];
• There is an edge between nums[i] and nums[j] if and only if they share a common factor greater than $1$.
Return the size of the largest connected component in the graph.
Example 1:
Input: nums = [4,6,15,35]
Output: 4
Example 2:
Input: nums = [20,50,9,63]
Output: 2
Example 3:
Input: nums = [2,3,6,7,4,12,21,39]
Output: 8
Constraints:
• $1 \le nums.length \le 2 \times 10^4$
• $1 \le nums[i] \le 10^5$
• All values in nums are different
## Enumerating prime factors + union-find
First, consider how to build the graph from nums. The size of nums is $n = 2 \times 10^4$. Enumerating all pairs of nodes and checking whether two numbers share a common factor costs $O(n^2\sqrt{M})$ (where $M = 10^5$ is the maximum value of $nums[i]$), which is too slow to consider.
Instead of building the graph by "enumerating pairs + computing common divisors", we can decompose each $nums[i]$ into prime factors (with a complexity of $O(\sqrt{nums[i]})$). Let $S$ be the set of prime factors obtained; for each prime factor in $S$ we record a mapping from that prime to the index $i$. If $nums[i]$ and $nums[j]$ are joined by an edge, then they are mapped to by at least one common prime factor.
The number of connected components can be maintained with a union-find structure, and the mapping relationship with a hash table.
When maintaining the mapping, use the prime factor as the key and the index $i$ as the value (we use the index $i$ as the node number instead of $nums[i]$; this shrinks the union-find array from size $10^5$ to $2 \times 10^4$).
While merging components with the union-find structure, synchronously maintain each component size sz and the current maximum component size ans.
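As a sanity check on Example 1, nums = [4, 6, 15, 35] factorizes as $4=2^2$, $6=2\cdot3$, $15=3\cdot5$, $35=5\cdot7$, giving the map {2: [0, 1], 3: [1, 2], 5: [2, 3], 7: [3]}; the unions 0–1, 1–2 and 2–3 then merge all four indices into a single component of size 4.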
Java code:
import java.util.*;

class Solution {
    static int N = 20010;
    static int[] p = new int[N], sz = new int[N];
    int ans = 1;

    // Find with path compression.
    int find(int x) {
        if (p[x] != x) p[x] = find(p[x]);
        return p[x];
    }

    // Merge the components of a and b; keep component sizes and the running maximum up to date.
    void union(int a, int b) {
        if (find(a) == find(b)) return;
        sz[find(a)] += sz[find(b)];
        p[find(b)] = p[find(a)];
        ans = Math.max(ans, sz[find(a)]);
    }

    public int largestComponentSize(int[] nums) {
        int n = nums.length;
        // prime factor -> indices of all nums[i] divisible by it
        Map<Integer, List<Integer>> map = new HashMap<>();
        for (int i = 0; i < n; i++) {
            int cur = nums[i];
            for (int j = 2; j * j <= cur; j++) {
                if (cur % j == 0) add(map, j, i);
                while (cur % j == 0) cur /= j;
            }
            if (cur > 1) add(map, cur, i);
        }
        for (int i = 0; i <= n; i++) {
            p[i] = i; sz[i] = 1;
        }
        for (int key : map.keySet()) {
            List<Integer> list = map.get(key);
            for (int i = 1; i < list.size(); i++) union(list.get(0), list.get(i));
        }
        return ans;
    }

    void add(Map<Integer, List<Integer>> map, int key, int val) {
        List<Integer> list = map.getOrDefault(key, new ArrayList<>());
        list.add(val); // this line was missing in the original snippet
        map.put(key, list);
    }
}
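A quick local sanity check of the Java version (a hypothetical Main class; on LeetCode only the Solution class itself is submitted):

public class Main {
    public static void main(String[] args) {
        // Example 3 from the problem statement; expected output: 8
        int[] nums = {2, 3, 6, 7, 4, 12, 21, 39};
        System.out.println(new Solution().largestComponentSize(nums));
    }
}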
TypeScript Code:
const N = 20010
const p: number[] = new Array<number>(N), sz = new Array<number>(N)
let ans = 0

// Find with path compression.
function find(x: number): number {
    if (p[x] != x) p[x] = find(p[x])
    return p[x]
}

// Merge the components of a and b; keep component sizes and the running maximum up to date.
function union(a: number, b: number): void {
    if (find(a) == find(b)) return
    sz[find(a)] += sz[find(b)]
    p[find(b)] = p[find(a)]
    ans = Math.max(ans, sz[find(a)])
}

function largestComponentSize(nums: number[]): number {
    const n = nums.length
    // prime factor -> indices of all nums[i] divisible by it
    const map: Map<number, Array<number>> = new Map<number, Array<number>>()
    for (let i = 0; i < n; i++) {
        let cur = nums[i]
        for (let j = 2; j * j <= cur; j++) {
            if (cur % j == 0) add(map, j, i)
            while (cur % j == 0) cur /= j
        }
        if (cur > 1) add(map, cur, i)
    }
    for (let i = 0; i < n; i++) {
        p[i] = i; sz[i] = 1
    }
    ans = 1
    for (const key of map.keys()) {
        const list = map.get(key)! // non-null: key comes from map.keys()
        for (let i = 1; i < list.length; i++) union(list[0], list[i])
    }
    return ans
};

function add(map: Map<number, Array<number>>, key: number, val: number): void {
    let list = map.get(key)
    if (list == null) list = new Array<number>()
    list.push(val)
    map.set(key, list)
}
• Time complexity: $O(n\sqrt{M})$
• Space complexity: $O(n)$
## Finally
This is the No.952 article in our "brush through LeetCode" series. The series began on January 1, 2021. As of the start date, there are 1916 questions on LeetCode, some of which are locked questions. We will finish all the unlocked questions first.
In this series of articles, in addition to explaining the problem-solving ideas, we will also give the most concise code as far as possible. If the general solution is involved, the corresponding code template will also be provided.
To make it easier for you to debug and submit code on your own computer, I have set up a companion repository: https://github.com/SharingSou... .
In that repository you can find the solution links for this series, the corresponding code for each article, links to the original LeetCode problems, and other selected solutions.
For more comprehensive and popular written-exam / interview material, please visit the nicely organized collection hub 🎉🎉 | 2023-02-06 09:59:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43368563055992126, "perplexity": 6541.294183514115}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500334.35/warc/CC-MAIN-20230206082428-20230206112428-00664.warc.gz"}
https://gamedev.stackexchange.com/questions/69145/how-can-i-extract-the-geometry-from-a-j3o-file/69153 | # How can I extract the Geometry from a j3o file?
I created a Blender file and then converted it into a j3o file. The only way to load the 3D structure in the game is through a Spatial object:
Spatial towerModel = assetManager.loadModel("Textures/tower.j3o");
Initially the tower in the scene was composed by a simple Geometry:
new Geometry("Tower." + index, new Box(X_SIZE, Y_SIZE, Z_SIZE));
To substitute this implementation with a proper 3d tower I need to use the Gemetry object from the 3jo.
How can I extract the Geometry from the j3o file?
The Spatial given to you from the loadModel method is most likely a Node. You'll have to traverse that node's children (and possibly grand-children) to get to the Geometry, which you will have to cast from one of the child Spatials.
I've not got the code before me right now, so I can't show you, but looking at the Javadoc it should be pretty simple (a bit of recursion should help).
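A minimal sketch of that traversal (a hypothetical helper using the standard jME3 scene-graph classes; adapt the search criterion, e.g. matching by name, as needed):

import com.jme3.scene.Geometry;
import com.jme3.scene.Node;
import com.jme3.scene.Spatial;

// Depth-first search: return the first Geometry found under the given Spatial.
public static Geometry findFirstGeometry(Spatial spatial) {
    if (spatial instanceof Geometry) {
        return (Geometry) spatial;
    }
    if (spatial instanceof Node) {
        for (Spatial child : ((Node) spatial).getChildren()) {
            Geometry found = findFirstGeometry(child);
            if (found != null) {
                return found;
            }
        }
    }
    return null; // no Geometry in this subtree
}

Called as findFirstGeometry(assetManager.loadModel("Textures/tower.j3o")), this would return the first mesh-bearing node of the loaded model.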
• I found the way to get the geometry, but once I attach it to the scene it doesn't show up; on the contrary in the "scene composer" windows I can see it. (In the blender file there's just the mesh object, I removed the camera and the lights). – Fab Jan 22 '14 at 20:02 | 2021-06-24 11:49:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2562948167324066, "perplexity": 1139.793155255033}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488553635.87/warc/CC-MAIN-20210624110458-20210624140458-00328.warc.gz"} |
https://studyadda.com/sample-papers/clat-sample-paper-2_q149/118/252499 | • # question_answer Direction: Each question contains a statement or relationship and a question regarding relationship based on the statement, select the correct option. If 'A + B' means A is the mother of B, 'A - B' means A is the brother of B, 'A % B' means A is the father of B and 'A x B' means A is the sister of B, which of the following shows that P is the maternal uncle of Q? A) $Q-N+M\times P$ B) $P+S\times N-Q$ C) $P-M+N\times Q$ D) $Q-S\,%\,P$
P - M $\to$ P is the brother of M. M + N $\to$ M is the mother of N. N x Q $\to$ N is the sister of Q. Since N is the sister of Q, M is also the mother of Q; therefore P, being the brother of Q's mother, is the maternal uncle of Q. Hence option C is correct. | 2022-01-19 04:53:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5345596075057983, "perplexity": 1463.8804647511893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301263.50/warc/CC-MAIN-20220119033421-20220119063421-00174.warc.gz"}
https://bird.bcamath.org/browse?authority=436be57f-0e8c-40dc-8d9b-2a31a6377d05&type=author | Now showing items 1-6 of 6
• #### End-point estimates, extrapolation for multilinear muckenhoupt classes, and applications
(2019)
In this paper we present the results announced in the recent work by the first, second, and fourth authors of the current paper concerning Rubio de Francia extrapolation for the so-called multilinear Muckenhoupt classes. ...
• #### Mixed weak type estimates: Examples and counterexamples related to a problem of E. Sawyer
(2016-01-01)
In this paper we study mixed weighted weak-type inequal- ities for families of functions, which can be applied to study classic operators in harmonic analysis. Our main theorem extends the key result from [CMP2].
• #### On pointwise and weighted estimates for commutators of Calderón-Zygmund operators
(2017)
In recent years, it has been well understood that a Calderón-Zygmund operator T is pointwise controlled by a finite number of dyadic operators of a very simple structure (called the sparse operators). We obtain a similar ...
• #### Proof of an extension of E. Sawyer's conjecture about weighted mixed weak-type estimates
(2018-09)
We show that if $v\in A_\infty$ and $u\in A_1$, then there is a constant $c$ depending on the $A_1$ constant of $u$ and the $A_{\infty}$ constant of $v$ such that $\Big\|\frac{T(fv)}{v}\Big\|_{L^{1,\infty}(uv)}\le c\,\ldots$
• #### Quantitative weighted mixed weak-type inequalities for classical operators
(2016-06-30)
We improve on several mixed weak type inequalities both for the Hardy-Littlewood maximal function and for Calderón-Zygmund operators. These type of inequalities were considered by Muckenhoupt and Wheeden and later on by ...
• #### Weighted mixed weak-type inequalities for multilinear operators
(2017)
In this paper we present a theorem that generalizes Sawyer's classic result about mixed weighted inequalities to the multilinear context. Let $\vec{w}=(w_1,...,w_m)$ and $\nu = w_1^\frac{1}{m}...w_m^\frac{1}{m}$, the main ... | 2022-09-26 13:14:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.663566529750824, "perplexity": 907.172436718036}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334871.54/warc/CC-MAIN-20220926113251-20220926143251-00553.warc.gz"} |
https://greprepclub.com/forum/seven-is-equal-to-how-many-thirds-of-seven-3036.html | It is currently 18 Nov 2018, 08:06
GMAT Club Daily Prep
Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
Your Progress
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
Seven is equal to how many thirds of seven ??
Seven is equal to how many thirds of seven ?? (posted 17 Dec 2016, 03:53)
Seven is equal to how many thirds of seven ??
(A) $$\frac{1}{3}$$
(B) 1
(C) 3
(D) 7
(E) 21
Re: Seven is equal to how many thirds of seven ?? (29 Sep 2017, 07:51)
The statement can be translated into the formula $$7 = \frac{x}{3}\cdot 7$$. Solving for $$x$$ gives $$x=3$$, so 7 is equal to 3 thirds of 7. Answer C!
Re: Seven is equal to how many thirds of seven ?? (29 Sep 2017, 10:36)
To turn $$\frac{1}{3}$$ of 7 into 7, we have to multiply by 3, i.e. $$\frac{1}{3}*7*3 = 7$$, so the answer is (C). | 2018-11-18 16:06:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.309740275144577, "perplexity": 8919.898703799603}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039744513.64/warc/CC-MAIN-20181118155835-20181118181202-00054.warc.gz"}
https://eprint.iacr.org/2015/836 | ### Ciphertext-Policy Attribute-Based Broadcast Encryption with Small Keys
Benjamin Wesolowski and Pascal Junod
##### Abstract
Broadcasting is a very efficient way to securely transmit information to a large set of geographically scattered receivers, and in practice, it is often the case that these receivers can be grouped in sets sharing common characteristics (or attributes). We describe in this paper an efficient ciphertext-policy attribute-based broadcast encryption scheme (CP-ABBE) supporting negative attributes and able to handle access policies in conjunctive normal form (CNF). Essentially, our scheme is a combination of the Boneh-Gentry-Waters broadcast encryption and of the Lewko-Sahai-Waters revocation schemes; the former is used to express attribute-based access policies while the latter is dedicated to the revocation of individual receivers. Our scheme is the first one that involves a public key and private keys having a size that is independent of the number of receivers registered in the system. Its selective security is proven with respect to the Generalized Diffie-Hellman Exponent (GDHE) problem on bilinear groups.
Category
Public-key cryptography
Publication info
Preprint. MINOR revision.
Contact author(s)
pascal junod @ heig-vd ch
Short URL
https://ia.cr/2015/836
CC BY
BibTeX
@misc{cryptoeprint:2015/836,
author = {Benjamin Wesolowski and Pascal Junod},
title = {Ciphertext-Policy Attribute-Based Broadcast Encryption with Small Keys},
howpublished = {Cryptology ePrint Archive, Paper 2015/836},
year = {2015},
note = {\url{https://eprint.iacr.org/2015/836}},
url = {https://eprint.iacr.org/2015/836}
}
Note: In order to protect the privacy of readers, eprint.iacr.org does not use cookies or embedded third party content. | 2023-03-27 00:46:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23208767175674438, "perplexity": 3607.6464900090286}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946584.94/warc/CC-MAIN-20230326235016-20230327025016-00106.warc.gz"} |
http://mathoverflow.net/questions/58495/why-hasnt-mereology-suceeded-as-an-alternative-to-set-theory?sort=oldest | Why hasn't mereology succeeded as an alternative to set theory?
I have recently run into this wikipedia article on mereology. I was surprised I had never heard of it before and indeed it seems to be seldom mentioned in the mathematical literature. Unlike set theory, which is founded on the idea of set membership, mereology is built upon what I consider conceptually more elementary, namely the relation between parts and the whole.
Personally, I have always found it a little bit unsatisfactory (philosophically speaking) that set theory postulates the existence of an empty set. But of course there is the technical aspect, and current axiomatizations of set theory seem to be quite good regarding what they allow us to prove.
Now it seems there have been some attempts to relate mereology and set theory, and according to the article, some authors have recently tried to deduce ZFC axioms as theorems in certain axiomatizations of it. Yet, apparently only a couple of well trained mathematicians (one of them Tarski) have discussed mereology, since most people have shown indifference towards the whole subject.
So my questions are: how is it that mereology had no success as a possible foundation for mathematics? Are axiomatizations based on mereology not suitable for most developments, or simply not worth the while? If so, what would be the technical reason behind this?
It doesn't have to have no success; even if it has the same success, there's still no incentive to switch. It needs to have greater success in order to make a switch seem like a good idea, and meanwhile we have category theory...! – Qiaochu Yuan Mar 15 '11 at 1:29
Things fall apart; the centre cannot hold / Mere ology is loosed upon the world... – Yemon Choi Mar 15 '11 at 5:23
@Qiaochu: A comment from Eric Raymond on Plan 9 may be in order here: "Compared to Plan 9, Unix creaks and clanks and has obvious rust spots, but it gets the job done well enough to hold its position. There is a lesson here for ambitious system architects: the most dangerous enemy of a better solution is an existing codebase that is just good enough." The same could be said of bases for doing mathematics. – Robert Haraway Mar 15 '11 at 13:55
This may sound harsh, but: where is the math question here? The OP's motivations for considering mereology seem to be a mixture of psychological and philosophical -- "mereology is built upon what I consider conceptually more elementary" -- but what would be a putative mathematical advantage of having mereological foundations? Note that the majority of working mathematicians are not only happy with set theory as a foundation: moreover, they don't want to think about foundational issues at all, and the (naive) concept of a set is something they have accepted since their school days. – Pete L. Clark May 9 '11 at 2:04
@ Pete: Whatever my motivation for asking the question might be (which you can or cannot consider worth the while), the question asks precisely about why mereological foundations are not suitable, compared to set theory; which is a rather technical matter (certainly mathematics). – godelian May 9 '11 at 2:40
Unlike category theory which is in many ways a freer framework in which to do mathematics and which very nicely captures universal objects and constructions (e.g., limits and colimits), mereology is a more restrictive framework than set theory. The whole/part relation can be captured by set/subset, but set/member cannot simply be recaptured in mereology. For instance, in mereotopology a space is comprised entirely of extended parts, no points. Try reformulating the separation axioms and deriving Urysohn's theorem, for example. (Maybe it can be done. I think so. But it's not immediately clear how.) For these reasons, mereology will remain of interest to nominalistically inclined mathematical philosophers (like Tarski, not to mention Russell and Whitehead in whose work I find mereological inclinations) but is not likely to spark a major mathematical research program, in my opinion.
Locale theory is topology without points. It proves Tychonoff theorem without using choice. In fact, it is a good idea to consider spaces as more than just bags of points. – Andrej Bauer Mar 15 '11 at 3:56
Thanks! I didn't mean to suggest it was a bad idea. – Jeremy Shipley Mar 15 '11 at 4:21
I thought that points were definable in mereology as objects that have no proper parts (after you get rid of the empty set's object). What's the obstruction that prevents mereology from getting set theory as a definitional extension in that way? – Carl Mummert Mar 15 '11 at 12:03
As a quibble, locale theory can certainly prove a result that is analogous to Tychonoff's theorem without AC, but because Tychonoff's theorem implies AC over ZF it's impossible to prove the actual Tychonoff theorem in ZF or in any constructive theory that is a subtheory of ZF when viewed from a classical standpoint. – Carl Mummert Mar 15 '11 at 12:08
My main point in answering the question is that mereology is more restrictive. Although it is true that interesting mathematics arises from adopting restrictions (intuitionism, constructivism), more restricitve frameworks are not likely to supplant less restrictive frameworks as widely adopted working foundations, in my opinion. – Jeremy Shipley Mar 15 '11 at 14:34
Lesniewski's idea was not only to replace set theory with mereology but to construct an entirely new foundation for mathematics, which consisted of three systems:
• protothetic - the counterpart of propositional logic
• ontology - which from a contemporary point of view is a first-order theory of a binary predicate; this could be roughly described as a theory of "is" (but do not confound it with $\in$)
• mereology - a nominalistically motivated theory of sets.
Lesniewski's motivations were first of all philosophical in spirit. He wrote explicitly that he could accept neither Russell and Whitehead's notion of class nor Frege's notion of the extension of a concept. Moreover, he could not accept the existence of the empty class. One of the most important, so to say, technical motivations was Russell's paradox.
As for mereology (I know very little about the other systems), Lesniewski's original system of axioms (as well as the one introduced by Leonard and Goodman under the name calculus of individuals) is definitely too weak to reconstruct even a fragment of arithmetic, for example. It was proved by Tarski (in the 1930s) that Lesniewski's mereology determines structures which bear a very strong resemblance to complete Boolean algebras. Every mereological structure can be transformed into a complete Boolean lattice by adding a zero element (whose non-existence is a consequence of the axioms of mereology). And vice versa, every complete Boolean lattice can be turned into (mutatis mutandis) a mereological structure by deleting the zero element. Thus it is by far too little to think of rebuilding mathematics in this framework.
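To see this correspondence in the smallest nontrivial case (my own illustration, not from Tarski's paper): take the four-element Boolean algebra $B=\{0,a,b,1\}$ with $a\wedge b=0$ and $a\vee b=1$. Deleting $0$ leaves the three-element mereological structure $\{a,b,1\}$, in which $a$ and $b$ are incompatible atoms and $1$ is their sum; adjoining a bottom element to it recovers $B$.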
However, as Jeremy Shipley said above, there is some work towards building point-free geometrical and topological systems based on mereology enhanced with an additional relation which, on its intended interpretation, models the situation in which regions are in contact (or are separated). Alfred Tarski himself was one of the first to do this in his Foundations of geometry of solids. One can then try to express separation axioms in the language of mereology plus connection, or require some other topological properties by means of axioms put upon connection. All of this can be done, however usually with an application of ZF (ZFC) at the metalevel, which is far from Lesniewski's intentions.
It seems worthwhile to point out that Steve’s answer also essentially answers Carl Mummert’s question (in a comment) about why one can’t get set theory as a definitional extension of mereology by defining points (as things with no proper parts) and then using “point $x$ is a part of object $y$” as the mereological interpretation of $x\in y$. You can indeed handle sets of points this way, but there’s no good way to handle sets of sets. Mereology (at least in Leśniewski’s version — I’m not familiar with other versions) would make no distinction between a collection of sets and the union of those sets. I think you can get somewhat closer to set theory by combining (as Leśniewski did) mereology with ontology, but even then I don’t think you get anywhere near ZF. To really handle something like the cumulative hierarchy of ZF (or even the shorter hierarchy of Russell-style type theory, I believe), mereology would have to be supplemented with some way to treat sets as (new) points, something like Frege’s notion of Wertverlauf (which would probably be anathema to Leśniewski).
Either my browser (Safari) or MO software seems to prefer French to Polish. It allows me to put an acute accent over an e, but when I try to put an acute accent over an s (as in Lesniewski) it inserts a space before the s and puts the accent on that. So please imagine that all occurrences of "Lesniewski" have an acute accent over the first s. – Andreas Blass Aug 15 '12 at 14:27
@Emil: Thanks for adding the accents. – Andreas Blass Aug 15 '12 at 16:24
In algebraic set theory à la Joyal and Moerdijk, the subset relation is taken as fundamental, with membership only being a derived notion (specifically, the cumulative hierarchy is taken to be the free "ZF-algebra"*; i.e., a partial order with small joins and an abstract "singleton" operator. The order corresponds to subsethood, and x is defined to be an element of y just in case the singleton operator applied to x yields a subset of y). I can never quite grasp what it is that mereology is supposed to be all about as a supposed contrast to set theory, but if it's just a matter of viewing subsethood as more elementary a concept than membership, well, there you go.
[*: ZF-algebra isn't a great name for the general concept of such structures, in my opinion, since they have very little to do with specifically Zermelo-Fraenkel set theory. Note that, while every object in the cumulative hierarchy is uniquely a join of singletons (and in this way can be viewed as a plain old bag of elements), in more general ZF-algebras, there may be objects which are not joins of singletons, thus carrying a more mereological flavor; in particular, these illustrate that subsethood is not definable in terms of membership, firmly establishing subsethood as the more primitive notion in this context]
I decided to add one more answer (instead of editing the previous one), since it is quite long. This will mainly address the OP question, Andreas Blass answer and Carl Mummert comment about defining sets as sets of atoms (points) in mereology. I hope it will shed some light on mereology and its relation to set theory.
In mereology, as it is done in the Lesniewskian tradition, it is assumed that the part-of relation (in symbols: $\sqsubseteq$) is a partial order (reflexive, antisymmetrical and transitive) and that it satisfies the separation condition (those familiar with forcing will find it very familiar): $$\neg x\sqsubseteq y\longrightarrow\exists z(z\sqsubseteq x\wedge z\mathrel{\bot} y)$$ where $z\mathrel{\bot} y\iff\neg\exists u(u\sqsubseteq z\wedge u\sqsubseteq y)$ ($z$ and $y$ are incompatible, otherwise they are compatible). The crucial point is the definition of mereological sum (sometimes called fusion as well). The very idea of mereological sum is hidden in the following equivalence:
an object $x$ is a mereological sum of the group of $S$-es if and only if every $S$ is part of $x$ and every part of $x$ is compatible with some $S$.
Notice that it is a consequence of the definition that there cannot be a mereological sum of an empty group of objects. Using sets and set-theoretical notation we may define the sum of a set $X$ as a binary relation in the following way: $$x\mathrel{\mathrm{Sum}} X\iff \forall y(y\in X\longrightarrow y\sqsubseteq x)\wedge\forall y(y\sqsubseteq x\longrightarrow\exists z(z\in X\wedge\neg z \mathrel{\bot} y)).$$ What is usually called classical mereology is a second-order system which is obtained by adding the following axiom: $$\forall X(X\neq\emptyset\longrightarrow\exists x(x\mathrel{\mathrm{Sum}} X)).$$ Building a first-order system is a little bit more painstaking. To simplify things a bit we may introduce some auxiliary notation: $$x\mathrel{\mathbf{sum}_y}\varphi(y)$$ as an abbreviation of the following formula: $$\forall y(\varphi(y)\longrightarrow y\sqsubseteq x)\wedge\forall u(u\sqsubseteq x\longrightarrow\exists z(\varphi(z)\wedge \neg z\mathrel{\bot} u)).$$ "$x\mathrel{\mathbf{sum}_y}\varphi(y)$" may be read as $x$ is a mereological sum of all $\varphi$-ers. From this we can prove for example that:
• $\forall z\bigl(z\mathrel{\mathbf{sum}_y}(y=z)\bigr)$ (every object is the sum of all objects identical with it)
• $\forall z\bigl(z\mathrel{\mathbf{sum}_y}(y\sqsubseteq z)\bigr)$ (every object is the sum of all its parts).
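A small worked model may help (my own illustration, not part of the original axiomatics): take two incompatible atoms $a$, $b$ and their fusion $a{+}b$, so the domain is $\{a,\ b,\ a{+}b\}$ with $a\mathrel{\bot}b$ and $a,b\sqsubseteq a{+}b$. Both bullet points check out for $z=a{+}b$: every part of $a{+}b$ (namely $a$, $b$, and $a{+}b$ itself) is compatible with some part of $a{+}b$. But note that $a{+}b$ equally sums the $\varphi$-ers for $\varphi(y):\iff(y=a\vee y=b)$, so quite different conditions can have one and the same sum.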
In this setting, the mereological sum existence axiom schema can be expressed as: $$\exists x\varphi(x)\longrightarrow\exists y(y\mathrel{\mathbf{sum}_x}\varphi(x)).$$ Since the consequence of the axioms presented is that there can only be one mereological sum of $\varphi$-ers we can introduce notation (analogous to the set-theoretical abstraction operator): $$\bigl[x\mid\varphi(x)\bigr],$$ for those formulas which are satisfied by at least one object. Now, the important thing is that: $$x=\bigl[x\bigr]$$ so we cannot distinguish between any given object and its mereological singleton (so to say), which is the first obstacle to interpreting ZF(C).
Defining proper part as $x\sqsubset y\iff x\sqsubseteq y\wedge x\neq y$ we may define mereological atoms (or points, if you prefer the name): $$\mathrm{Atom}(x)\iff\neg\exists y(y\sqsubset x).$$ Now, in case $a_1,\ldots,a_n$ are atoms we can indeed treat $\bigl[a_1,\ldots,a_n\bigr]$ as a counterpart of $\{a_1,\ldots,a_n\}$ (and similarly in case of infinite collections), thus in this case the interpretation suggested by Carl Mummert and mentioned by Andreas Blass: $$x\in y\iff\mathrm{Atom}(x)\wedge x\sqsubset y,$$ works fine. But it does not work for example for: $$\bigl[\bigl[a_1,\ldots,a_n\bigr],\bigl[b_1,\ldots,b_m\bigr]\bigr]=\bigl[a_1,\ldots,a_n, b_1,\ldots,b_m\bigr],$$ since under the interpretation in question for every $a_i$: $$a_i\in\bigl[\bigl[a_1,\ldots,a_n\bigr],\bigl[b_1,\ldots,b_m\bigr]\bigr].$$ Thus, as Andreas already pointed out, there is no way to differentiate between sets of atoms and sets of sets of atoms and so on. Everything is reducible to a mereological set of atoms. (It is worth mentioning here as well that the existence of atoms is independent of the axioms of classical mereology.)
To conclude this lengthy post, the crucial distinction between mereological sets and, so to say, standard ones is (I think) hidden in the following fact. The equivalence below is true about sets (with obvious restrictions, but assume that we limit our attention to a domain which is a set): $$\varphi(x)\iff x\in\{z\mid\varphi(z)\},$$ while its mereological counterpart is usually not true. That is, it is the case that: $$\varphi(x)\longrightarrow x\sqsubseteq\bigl[z\mid\varphi(z)\bigr],$$ but it is NOT the case that: $$x\sqsubseteq\bigl[z\mid\varphi(z)\bigr]\longrightarrow \varphi(x).$$
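The toy model above already witnesses the failure of the converse: for $\varphi(y):\iff(y=a\vee y=b)$ we have $a{+}b\sqsubseteq\bigl[z\mid\varphi(z)\bigr]=a{+}b$, and yet $\neg\varphi(a{+}b)$.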
EDIT: Originally I suggested that it might be interesting to consider a system of mereology with the implication above taken as an axiom. However, in the comment below Andreas pointed to the fact that this entails linearity of $\sqsubseteq$. The consequence is that the class of models of the theory which consists of the poset axioms+separation+existence of mereological sums narrows down to a one-element (up to isomorphism) class, the only model being the degenerate one-element structure.
As Jeremy Shipley wrote above (in comments), 'part of' is a decent interpretation of subsethood, but not of membership. There are still some other points worth mentioning, but this post has already got out of control.
I experience some problems with TeX notation - I write \{ and \} but the brackets are not visible in my browser. Could somebody please help me with this? – Mad Hatter Aug 17 '12 at 19:01
Fixed. You need to write these as \\{ \\} (or \lbrace \rbrace). – Emil Jeřábek Aug 17 '12 at 19:07
You wrote that it may be interesting to consider mereology with the additional axiom that if $x$ is part of the sum of the $\phi$-ers then $x$ is itself a $\phi$-er. This axiom looks very strange to me for the following reason. Consider any two things $a$ and $b$, and let $\phi(z)$ say "$z=a$ or $z=b$". Let $s$ be the sum of the $\phi$-ers, i.e., of $a$ and $b$. Since $s$ is part of itself, your axiom would require $\phi(s)$. So $s$ would be one of $a$ and $b$, say $a$. Since $b$ is part of $s$, we'd get that $b$ is part of $a$. Conclusion: Of any two things, one is part of the other. – Andreas Blass Aug 17 '12 at 20:33
@godelian: You can find something about non-wellfounded approach to mereology in the paper by A.J. Cotnoir and A. Bacon "Non-wellfounded mereology", Review of Symbolic Logic / Volume 5 / Issue 02 / June 2012, pp. 187-204 . Hope this helps. – Mad Hatter Aug 18 '12 at 11:36
The fact that the only structure satisfying the axioms for mereology plus the schema in question is the degenerate one can also be shown directly, using the fact that the mereology axioms entail the existence of the unity $\mathbf{1}$, that is the object $x$ such that $\forall y(y\sqsubseteq x)$. One can now put $\varphi(x)\iff\forall y(y\sqsubseteq x)$. Since for any object $y$ it is the case that $y\sqsubseteq\mathbf{1}=\bigl[x\mid\varphi(x)\bigr]$, the axiom entails $\forall z(z\sqsubseteq y)$, that is $y=\mathbf{1}$. Andreas, thank you very much once again for the comment! – Mad Hatter Aug 18 '12 at 15:04
The following remarks reflect personal research that may be relevant to the idea of a mereological foundation.
I devised a set of sentences intended to admit a universal class to Zermelo-Fraenkel set theory. The strategy involved a primitive part relation and a primitive membership relation, with additional axioms to deal with identity and to recharacterize the part relation as a subset relation.
The proper part relation can be expressed as a self-defining predicate with a circular syntax. For this reason, I view the system as related to mereology.
The membership relation depends on the part relation, but is also introduced with a circular syntax.
The sense of these sentences is that to be a subset cannot exclude being a basic open set for a topology. To be an element cannot exclude being an element of a basic open set for a topology.
No functions or constants have such a definition. A grammatical equivalence with respect to the primitive relations is defined. A first-order identity is defined after certain axioms establish familiar relations with respect to class equivalence. Second-order extensionality holds, but it is not the criterion of identity. Functions and constants may be introduced only with non-circular syntax in relation to the first-order identity predicate.
Although mereology is generally thought of in terms of the proper part relation, if one reads Lesniewski, there is a great deal of effort involved with investigation of logical equivalence. This work is done in response to Tarski's paper on primitive logistic. Tarski's analysis is done in second-order logic, as is Lesniewski's.
So, the manipulations to obtain an identity relation are consistent with Lesniewski's work, even though it does not seem that way because the usual feature discussed is the part relation.
All objects are classes, with exactly one class as a proper class. The proper part relation is essential to establish this distinction. The first-order identity relation is also essential since the single class that is not an element of any class is unique by virtue of first-order identity. Second-order extensionality does not permit this distinction. The sole proper class is the set universe.
Again, this is consistent with Lesniewski's work. In objecting to Russell's paradox, Lesniewski develops this notion of a full class. This becomes the general mereological principle that a class and its parts are uniform.
The membership relation could be stratified using the proper part relation. But, to establish singletons relative to the modified axiom of pairing, an empty set had to be assumed. This is not a typical mereological assumption. This stratification is comparable to what Quine found necessary in order to have a universal class for his New Foundations. If compared with Euclid, the empty set is "that which has no parts". It is the ground for units which are "that by which what exists is one".
There is a power set axiom. However, a similar axiom only collecting proper parts is included as needed to form the first-order identity. This, too, is comparable to Quine whose system has Cantorian and non-Cantorian classes. In order for the set universe to be differentiated from its elements, proper parts had to be associated with the membership relation in the sense of a power axiom. Once a first-order identity is described, the usual power set axiom can be defined for the Cantorian "finished classes".
If these things do not sound bad enough, the model theory would necessarily be unacceptable to those committed to a predicative model construction strategy. The mereological or topological emphasis is viewed as a second-order structure in spite of the manipulations to obtain a first-order identity relation. This is consistent with the Tarskian analysis and the Lesniewskian program of research. But, it is non-standard with respect to modern foundational thinking.
In this sense, the system is Brouwerian. Logicism and logical atomism reduce the notion of object to presupposed denotations and treat the universe as Ax(x=x) with respect to ontology. When Leibniz introduced the principle of identity of indiscernibles, he did so while invoking geometric principles. The system interprets the Cantorian theory of ones in relation to his topological ideas as reflecting Leibniz' original statement. This is actually the source of the stratified membership relation. I compare it to Brouwerian ideals in that a focus on geometry is a rejection of the logicist interpretation of Leibniz principle of identity of indiscernibles.
In general, it would be best to view the structure as a closure algebra. The set universe would be the intersection over the empty set. So, the system is closed under arbitrary intersection in the same sense that an axiom of union may be interpreted as arbitrary union. With regard to statements in Aristotle, a choice has been made about what "exists". In naive set theory and set theories such as New Foundations, no distinction is made with respect to partitions in relation to negation. Aristotle remarks that one should not attempt to negate substance. A closure algebra interpretation makes a distinguishing choice of closed sets over open sets. This actually derives from the model-theoretic axiom of foundation. The transitive closures satisfy the closure axioms.
It is a very strong system. It is at least as strong as Tarski's axiom. So, it would be modeled by an inaccessible cardinal or stronger.
Although this system will never be published, it was developed carefully. I hope that these remarks help anyone who might wonder what would be involved in a mathematics based on a part relation. But, if you read Lesniewski, and the paper by Tarski, you will see that much of a Lesniewskian system has nothing to do with the part relation. The part relation had merely been an outcome of his analysis of Russell's paradox, and, he insisted that the paradox should be ignored in the development of foundations because it was the result of a mistaken analysis concerning classes.
# Difference between calculated inclusion probability and what is returned by sampling function?
I have a (small) population from which I wish to sample. I assign probabilities proportional to $y$. I enumerate the possible samples and then determine the probability of each sample occurring based on the product of the probabilities for each $y_i$ in the sample. I add up the probabilities for the samples that contain the $y_1$ and I believe (incorrectly?) that under the assumption of independence (i.e. with replacement sampling) this gives me the inclusion probability for $y_1$. I look at the inclusion probabilities returned by the inclusionprobabilities function in the sampling package and I get a different answer. I do not understand why, is someone able to explain?
library(survey)
library(sampling)
library(gtools)
set.seed(123)
y <- c(1190,26751,68570,34536)
p <- y/sum(y)
df <- data.frame(permutations(n=length(y), r=2, v=1:length(y), repeats.allowed = T))
df$p <- p[df$X1] * p[df$X2]
df
# X1 and X2 denote the index of the y value that is included in the sample.
   X1 X2             p
1   1  1 0.00008245932
2   1  2 0.00185367169
3   1  3 0.00475145854
4   1  4 0.00239312195
5   2  1 0.00185367169
6   2  2 0.04167022794
7   2  3 0.10681198947
8   2  4 0.05379697926
9   3  1 0.00475145854
10  3  2 0.10681198947
11  3  3 0.27378782541
12  3  4 0.13789611111
13  4  1 0.00239312195
14  4  2 0.05379697926
15  4  3 0.13789611111
16  4  4 0.06945282329
samplesSet <- data.frame(df[1 == df$X1 | 1 == df$X2, ])
sum(samplesSet$p)
pik <- inclusionprobabilities(y, 2)
data.frame(pik=pik,name=1:length(y))
Update: Thanks both @whuber and @StasK. It is clear that the inclusion probabilities reflect sampling without replacement. However, I am uncertain what the inclusion probabilities returned by inclusionprobabilities are. They seem to be calculated as:
$$n \frac{y_i}{\sum_{j=1}^{N} y_j}$$
and have an adjustment to ensure that no probability is greater than 1 and also that the sum of the probabilities corresponds to the sample size.
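For what it is worth, here is a rough sketch of that adjustment as I read it (my own reimplementation of the documented rule, not the package source): scale the values to sum to the sample size, then repeatedly cap anything over 1 and rescale the remainder.

cap_pik <- function(y, n) {
  # scale so the probabilities sum to the sample size n
  pik <- n * y / sum(y)
  # repeatedly cap values at 1 and rescale the uncapped remainder
  while (any(pik > 1)) {
    over <- pik >= 1
    pik[over] <- 1
    pik[!over] <- (n - sum(over)) * y[!over] / sum(y[!over])
  }
  pik
}
cap_pik(c(1190, 26751, 68570, 34536), 2)
# approximately 0.01905 0.42817 1.00000 0.55278 -- the third unit gets capped at 1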
If I assume that my population is $y=\{1,2,3\}$ such that the probabilities of selection are $\frac{1}{6}$, $\frac{2}{6}$ and $\frac{3}{6}$ and then I take a sample of 2, I calculate the inclusion probabilities to be $\frac{5}{12}$, $\frac{11}{15}$ and $\frac{17}{20}$ respectively. Clearly, these are not what is returned by inclusionprobabilities, and so my question now is: have I calculated the inclusion probabilities incorrectly, or is the inclusionprobabilities function returning something that represents the inclusion probabilities but isn't actually the inclusion probabilities?
myn <- 2
a <- c(1,2,3)
p <- myn * a/sum(a); p
[1] 0.3333333 0.6666667 1.0000000
inclusionprobabilities(a, myn)
[1] 0.3333333 0.6666667 1.0000000
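For reference, the following sketch (mine, assuming successive draw-by-draw WOR sampling with selection probabilities proportional to the remaining sizes) reproduces the hand-calculated values by enumerating all ordered samples:

a <- c(1, 2, 3)
p <- a / sum(a)
incl <- numeric(length(a))
for (i in seq_along(a)) for (j in seq_along(a)) {
  if (i != j) {
    pr <- p[i] * p[j] / (1 - p[i])  # P(unit i drawn first, unit j second)
    incl[i] <- incl[i] + pr         # both units of the pair are included
    incl[j] <- incl[j] + pr
  }
}
incl
[1] 0.4166667 0.7333333 0.8500000

So my hand calculation corresponds to successive WOR sampling, which is evidently a different quantity from what inclusionprobabilities returns.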
Thanks.
• The help is indeed abysmal. Protect yourself by testing this function on simple arguments with known answers. For instance, inclusionprobabilities(1:2,2) returns the vector 1 1. What does that tell you about the assumed form of sampling? Could this possibly reflect sampling with replacement? (Such prophylactic testing is essential when learning to use any package--the ultimate arbiter of questions like this is what the computer does, not what the help pages seem to say!) – whuber Jan 23 '15 at 16:53
• Sampling with unequal probabilities is really weird, and it does not always give you the answer you expect, although that mostly has to do for sampling with replacement. Selection probabilities, and especially the pairwise selection probabilities, depend on the particular sampling algorithm, see Hanif & Brewer (1983) -- nearly impossible to find -- and Tille 2006. If inclusionprobabilities() indeed refer to sampling WOR, you need to filter on X1!=X2. – StasK Jan 24 '15 at 4:31
• Thank you @whuber. The return of 1 1 tells me that it does not reflect sampling with replacement. I have done some additional tests and updated my original question. It is still not clear to me what the inclusionprobabilities() function is returning. – t-student Jan 27 '15 at 4:59
Sampling with replacement is boring. Sampling without replacement is very interesting. That's why the authors of library(sampling) restricted their attention to sampling WOR. So inclusionprobabilities() takes the baseline rates in your y and figures out what the inclusion probabilities would be should a proper unequal-probability WOR sampling algorithm be applied to these numbers.
Looking at the source code, I imagine that your snippet of code reproduces the "regular" case of inclusionprobabilities() when none of the inclusion probabilities exceed 1. In that regular case, the inclusion probabilities are simply the input probabilities scaled up so that their sum is equal to the target sample size. Note that inclusion probabilities refer to the units on the frame, rather than the specific samples, as your code does.
For sampling with replacement, I believe your calculations are correct, in that the probability of each pair is the product of the individual probabilities. Then the with-replacement analogues of the inclusion probabilities are the sums across all rows where either X1 or X2 is equal to 1, 2, 3 or 4 (the indices of the original units):
for(k in 1:4) {
  print(sum(df$p[df$X1 == k | df$X2 == k]))
}
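As a cross-check (my addition, assuming two independent WR draws), each of these sums should equal the probability that unit $k$ appears in at least one of the two draws, i.e. $1-(1-p_k)^2$, which with p <- y/sum(y) from the question is simply:

1 - (1 - p)^2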
# Antibiotic resistance: How to prevent the next public health emergency
Emma H. Yee*, Steven S. Cheng, Grant A. Knappe, and Christine A. Moomau
Edited by Shruti Muralidhar and Anthony Tabet
Article | Aug. 20, 2020
*Email: ehyee@mit.edu
DOI: 10.38105/spr.7bhjorymhn
## Highlights
• Currently a top 10 cause of death in the US, antibiotic-resistant infections continue to accelerate; we need to understand and address this global health threat
• Antibiotic over-prescription contributes to accelerated antibiotic resistance but can be combated by developing rapid diagnostics and antibiotic stewardship initiatives
• Governments can legislate policies to stimulate new antibiotic production, while mandating equitable development and sustainable usage
## Article Summary
Antibiotics are a vital component of global health. By killing or inhibiting the growth of bacteria, antibiotics treat infections like pneumonia, staph, and tuberculosis. By preventing infections, they enable major medical procedures such as surgeries and chemotherapy. However, bacteria are becoming increasingly resistant to current antibiotics, causing an estimated 34,000 deaths annually in the US. Left unchecked, antibiotic resistance will have major public health consequences, causing over 5 million deaths each year by 2050. Major causes of this crisis are the misuse of existing antibiotics and the slow development of new antibiotics. To incentivize responsible use, governments and institutions are initiating education programs, mandating comprehensive hospital antibiotic stewardship programs, and funding the development of rapid diagnostics. To bring new antibiotic drugs to market, the US government and non-governmental organizations are funding scientific research toward antibiotic development. Additional incentives are being pursued to improve the commercial viability of antibiotic development and protect drug developers from the unique challenges of the antibiotic market. With diligent efforts to improve responsible use and encourage novel antibiotic drug discovery, we can decrease the global disease burden, save money, and save lives.
Antibiotics are drugs that kill or inhibit the growth of bacteria, and we have them to thank for the 25-year increase in American life expectancy in the last century[1,2]. In 1900, the three leading causes of death were bacterial infections: pneumonia, tuberculosis, and diarrhea/enteritis[3]. Penicillin, the first antibiotic, was discovered in 1928. But it was not until World War II, when wounded soldiers were more likely to die from infections than the injuries themselves, that governments realized penicillin’s life-saving potential[4]. The US government began developing and mass-producing penicillin through unprecedented public, private, and international collaborations, prompting a new era of antibiotics. Antibiotics are now used to treat a myriad of common infections like strep throat, meningitis, tuberculosis, tetanus, urinary tract infections, and food poisoning. They also enable medical procedures that otherwise create a high risk of infection, such as invasive surgery, organ transplantation, and chemotherapy[5]. However, antibiotics are not “one size fits all”; certain types of antibiotics are only effective against certain kinds of bacteria, and all antibiotics are ineffective against viruses[6].
Antibiotics kill or inhibit bacterial growth via various mechanisms of action; they might attack the protective bacterial cell wall, interfere with bacterial reproduction, or interrupt production of molecules necessary for the bacteria’s survival[7]. However, bacteria reproduce and evolve rapidly, changing over time to resist an antibiotic’s destructive mechanism of action. In fact, the more we use antibiotics, the faster bacteria evolve to resist those antibiotics. As bacteria reproduce, random DNA mutations will occur. Most random mutations have no effect on the bacteria, but sometimes a mutation will give the bacteria a special ability to resist an antibiotic—for instance, the mutation may change the cellular target of the antibiotic, or allow the bacteria to pump the drug out of the cell. When an antibiotic is used on bacteria, most of the population will die, but if any of the bacteria have one of these resistance-conferring mutations, they will survive and continue to reproduce, until the entire population is resistant[5]. The use of antibiotics therefore creates environments where bacteria with antibiotic resistance mutations are more likely to survive and reproduce, while susceptible bacteria are gradually killed off.
Figure 1: Use of an antibiotic gradually increases the prevalence of resistant bacteria. If any cell has developed characteristics allowing it to resist attack by an antibiotic, it is more likely to survive and multiply.
This means that, over time, the bacteria that cause infections in humans are more and more likely to be resistant to common antibiotics. It is important to note that bacteria develop antibiotic resistance–not people. But when people use lots of antibiotics, they change bacterial populations such that more and more bacteria are resistant to those antibiotic drugs. This illustrates the double-edged sword nature of antibiotic use: antibiotics are immensely valuable for combating countless infections and enabling medical procedures, but the more we use them, the less valuable they become.
Today, antibiotic resistance is accelerating at alarming rates. The Centers for Disease Control and Prevention (CDC) estimates there are 3 million antibiotic resistant infections in the US every year, causing at least 34,000 deaths[5]. Globally, at least 700,000 deaths occur due to resistant infections, most of which are bacterial; the actual number is likely higher due to poor reporting and surveillance[8]. The prospect of widespread antibiotic resistance threatens to bring society into a post-antibiotic age where infections are more expensive and difficult to treat. This is a threat to not only public health but also the economic stability of the healthcare system[9] and national security[10].
Figure 2:Annual global deaths due to different factors. Antimicrobial resistance (AMR) accounts for resistance from bacteria, as well as fungi, viruses, parasites, and other microbes[15].
This review will focus on medical use of antibiotics in humans in the US, but antibiotic use in animals and agriculture is also a major contributor to the current crisis[6]. It is also critical to understand that combating antibiotic resistance will require global cooperative action because infection-causing bacteria spread rapidly between cities, countries, and continents. A large part of addressing antibiotic resistance in the US is assisting and coordinating with other governments, especially those in low-income countries, which have the highest instances of antibiotic resistance but the fewest resources to deal with it[11]. It is also vital to understand the causes of antibiotic resistance in the US and effective actions US institutions can take.
Misuse and Overuse of Antibiotics
Overuse of antibiotics is a major contributor to the rapid proliferation of antibiotic resistant infections. It is estimated that US doctors' offices and emergency departments prescribe about 47 million unnecessary antibiotic courses annually, amounting to 30% of all antibiotic prescriptions[12]. Many studies show that even when illnesses do require antibiotics, prescribed time courses are significantly longer than national guidelines recommend[13, 14].
Rapid Diagnostics and Antibiotic Prescriptions: A major cause of ubiquitous antibiotic overuse is a lack of rapid methods for diagnosing infections. Physicians rely on tests that usually take days to weeks to identify if an infection is bacterial and, if so, which antibiotics will be most effective. Waiting this long can be harmful or even fatal for patients[15]. Therefore, physicians usually prescribe broadly effective antibiotics while knowing little about the nature of the infection[15]. This can save lives, but if the infection is caused by a virus or resistant bacteria, the antibiotics will not treat the illness and will give resistant strains a chance to further multiply, leaving patients susceptible to additional infections.
With growing awareness in the last 5-10 years that appropriate antibiotic use is difficult with current diagnostics, the CDC, the National Institute of Allergy and Infectious Diseases (NIAID), and the Biomedical Advanced Research and Development Authority (BARDA) have collectively awarded hundreds of millions of dollars to state health departments, businesses, and universities to develop rapid diagnostics[16]. BARDA and NIAID also organized a $20 million prize, the Antimicrobial Resistance Diagnostic Challenge[17], and fund the global non-profit, CARB-X, which has invested $82.5 million in 55 projects worldwide for antibiotic resistance research, including diagnostics[18]. This surge in resources and funding has increased rapid diagnostic development. For example, the NIAID funded development of BioFire's FilmArray[19], which is now an FDA-cleared diagnostic test available for purchase in the US[20]. In just an hour, it tests patient samples for several common types of bacteria, viruses, and yeast, including antibiotic resistant ones[21]. However, new diagnostic technologies have limited effectiveness when they fail to meet practical cost and resource requirements. Cepheid's GeneXpert MTB/RIF test, for example, can diagnose tuberculosis infection and determine resistance to rifampicin, a common antibiotic for tuberculosis, in 2 hours[22]. Unfortunately, it has not been used as widely as initially expected[23], mainly because the equipment costs $17,000, not counting training and set-up costs[24]. This illustrates another major shortcoming of current diagnostic technologies: high healthcare infrastructure and cost requirements that render them inaccessible to many people. Widespread access to rapid diagnostics is not just about fairness, it's a necessity. Antibiotic resistance will remain a problem in the US as long as it is a problem anywhere in the country or the world due to inevitable intra- and international bacterial transmission. Many recently developed rapid diagnostics cost $100-$250 per test[25, 26]. These diagnostic innovations are promising and valuable in filling part of the gap in rapid diagnostics, but their benefits will not be felt by the majority of global hospitals and patients that cannot afford or support high cost, high tech diagnostic investments. Increasing institutional funding in the last 10 years has resulted in new rapid diagnostics for identifying and characterizing infections, a potential step towards reducing antibiotic misuse and subsequent development of antibiotic resistance. However, ensuring accessibility of technological improvements is essential in combating antibiotic resistance.

Prescribing Practices: Updating prescription standards and educating healthcare workers and patients on responsible antibiotic use is another key step in reducing antibiotic overuse. In the US, patients are often prescribed antibiotics for far longer than necessary. Two recent studies found that 70% of patients with sinus infections and 70% of adults hospitalized with pneumonia were given antibiotics for 3 or more days longer than recommended[13, 14]. Oftentimes, this stems from an out-of-date belief that longer is better in terms of preventing the development and spread of resistant bacteria. In fact, the opposite is true. Shorter courses of antibiotics lower the selective pressure for development of resistance.
This was illustrated in a study of pediatric antibiotic use[27], where children prescribed 5 days of amoxicillin for the treatment of respiratory infections were less likely to carry antibiotic resistant Streptococcus pneumoniae in their nasal passage than their peers who were treated for 10 days. These children were also found to be less likely to transmit resistant bacteria to others. In many cases, common antibiotic treatments can be shortened without affecting the outcome. A trial of pneumonia patients found that the standard 8-day course of amoxicillin can be shortened to just 3 days with equal symptom relief and fewer side effects[28]. Similarly, treatment of ventilator-associated pneumonia can be effectively shortened from 14 to 8 days[29]. In some cases, shortened antibiotic courses have actually improved patient outcomes. A reduced course for urinary tract infections from 14 days to 7 days is not only effective, it also prevents post-treatment yeast infections[30]. As scientists and clinicians become more aware of the dangers of resistance, more studies are being conducted to determine the minimum amount of antibiotic required to adequately treat infections. The Infectious Diseases Society of America has also updated their Clinical Practice Guidelines to reflect findings that shorter treatment schedules are often just as effective, are easier to comply with, and reduce development and spread of resistant bacteria[31]. Performing minimum effective antibiotic treatment trials is costly in the short term, but necessary to safely revise guidelines and save on long-term healthcare costs.

Public misunderstanding and misinformation regarding antibiotics also contribute to their overprescription. In many clinical settings where antibiotics are not necessary, patients may believe antibiotics are the most effective treatment and push their doctors to inappropriately prescribe them. For example, patients often seek antibiotics for viral respiratory illnesses (i.e. cold and flu), despite antibiotics being ineffective against viral infections[5]. It has been demonstrated that patient expectation of antibiotics or physician perception of this desire have a significant influence on antibiotic prescription[32–34].

Table 1: Antibiotic overuse is caused largely by shortcomings in diagnostic technologies and prescribing practices, but there are many possible ways to address these challenges.

Efforts to address this issue include educational initiatives for the public and antibiotic stewardship programs for healthcare providers. One such initiative was France's national campaign to reduce antibiotic use, launched in 2001[35]. France, Europe's largest antibiotics consumer, sought to address the problem through physician training and a public health campaign called "Antibiotics are not automatic". This campaign spread public awareness that overusing antibiotics leads to resistance, and, during the winter flu season, that antibiotics kill bacteria – not the viruses responsible for most respiratory infections. Concurrently with this initiative, antibiotic use in France dropped by over 25% from 2000 to 2007, highlighting the ability of public health education to change clinical outcomes. In recent years, steps have been taken both in the US and internationally to encourage responsible antibiotic use via education, updated prescribing standards, and other courses of action.
In 2016, the Joint Commission on Hospital Accreditation, an organization that accredits US healthcare organizations, mandated antibiotic stewardship programs in US hospitals that participate in Medicare and Medicaid. The Joint Commission issued standards cited from the CDC's Core Elements of Hospital Antibiotic Stewardship Programs[36], including educating staff, healthcare practitioners, patients, and their families on responsible antibiotic use and resistance, appointing a pharmacist leader to improve hospitals' antibiotic use, tracking and reporting antibiotic prescribing and resistance patterns, and developing protocols for specific antibiotic use cases, such as pneumonia. The number of hospitals reporting an antibiotic stewardship program that meets all the CDC's Core Elements doubled between 2014 and 2017[37], and will likely increase further, with stewardship programs now tied to accreditation. On an international scale, the UN and CDC have pushed for global implementation of One Health responses by releasing recommendations for engaging all members of society—governments, businesses, healthcare workers, etc.—in coordinated and strategic efforts to address antibiotic resistance[8]. Comprehensive promotion of responsible antibiotic use is vital to maintaining antibiotics' usefulness for as long as possible, especially given the difficulty of developing new antibiotics.

Revitalizing the Antibiotic Pipeline

While it is important that existing antibiotics are prescribed cautiously and used responsibly, all antibiotics inevitably encounter resistance[38]. Consequently, continuously developing antibiotics with novel mechanisms of action—the method that an antibiotic uses to kill bacteria—that circumvent existing resistances will remain essential. However, developing these new drugs is costly; it can take well over a decade and cost more than $2 billion, with a 90% failure rate looming over the project[38]. Clinical trials, which require large, diverse populations to demonstrate evidence of drug superiority, account for 65% of the risk-adjusted cost for developing antibiotics[15]. The difficulty of antibiotic drug development is illustrated by the 2019 FDA approval of lefamulin, which marked the first approval of an IV/orally-administered antibiotic with a novel mechanism of action in two decades[39]. Scientific challenges inhibit discovery significantly. The immediately apparent antibiotic candidates have been developed, and discovering antibiotics with new mechanisms of action is challenging. It is now thought that any new, effective antibiotics will need multiple capabilities for killing bacteria, making their discovery more complex[3]. Emerging approaches in antibiotic discovery such as deep learning algorithms are promising technologies to solve these scientific challenges, but are far from bringing new antibiotics to patients[40]. In addition to scientific obstacles, the economics of antibiotic development have reduced innovation and output. The free market is failing to meet society's antibiotic needs via multiple pathways[41]. Traditional sales-based models, in which revenue is directly proportional to the volume of sales, are antagonistic towards society's goal of sustainable antibiotic use[2]. Evidence of the current system's failure is the drastic decrease in antibiotic research programs[3] and the sparse output of new antibiotics[2].
To address these challenges, policymakers are crucial actors; they can facilitate fertile economic conditions using a combination of 1) "push" policies to galvanize antibiotic discovery and development efforts and 2) "pull" policies to create profitable economic conditions, incentivizing industry to work in this area. Simultaneously, these policies must be supplemented by sufficient regulations to ensure sustainable and equitable usage, broadly maximizing overall societal benefits.

Push Policies: Push policies drive companies to conduct antibiotic research and clinical trials[42] by providing monetary resources to antibiotic developers. Push policies are realized via grants and pipeline coordinators. Government grants allow both academia and industry to investigate antibiotic candidates and conduct clinical trials. Pipeline coordinators are agencies that ensure governmental funding is distributed efficiently across development stages. Coordinators are essential to ensuring equitable funding distribution across antibiotic candidates and identifying gaps and needs in the antibiotic pipeline from basic research through production. These vehicles have broad precedents and have demonstrated effectiveness at stimulating early stage scientific research. Current estimates show $550 million is spent annually on push spending, though some recommendations show that this number should be $800 million to fully meet the demand for antibiotic research[42]. However, push policies and spending do not completely address the major economic issues.
Figure 3: A combination of push and pull policies is necessary to generate conditions to revitalize the antibiotic pipeline. Currently, only push policies are implemented. Pull policies can de-link an antibiotic's development from its economic success, which is projected to increase the development rate of antibiotics that society needs.
Pull Policies: The primary goal of push policies is to jump-start research and development in antibiotic discovery, but issues remain with the current market structure for antibiotics. This is illustrated by the fact that companies are failing after bringing important antibiotics to market. For instance, the biopharmaceutical company Achaogen successfully developed the antibiotic plazomicin in 2018, but filed for bankruptcy the following year due to insufficient profits from plazomicin[43]. Why would a company that successfully brings a new antibiotic to market fail? Antibiotics are generally prescribed for short periods of time (usually under two weeks), modern health policies support reducing or delaying the use of new antibiotics, and the market lifetime of antibiotics is reduced due to the inevitable development of resistance[44]. Overall, these realities minimize sales of the new antibiotic and thus the profits of the developing company. In response, policymakers have proposed pull policies to de-link the sales of the new antibiotic from the economic reward given to the developers, improving the economic viability of developing new antibiotics. These pull policies are supported by the Infectious Diseases Society of America[45]. By de-linking sales from economic reward, the revenue from a new antibiotic is not purely based on the sales volume of that antibiotic. For example, a market entry reward (MER) — a large monetary sum given to developers of novel antibiotics upon successful drug approval — can be used to partially or fully de-link the number of sales from the economic reward. Multiple groups, such as the Boston Consulting Group, have estimated that a $1 billion MER per antibiotic is sufficient, suggesting that this award amount would lead to twenty novel antibiotics for society over the next three decades[42, 46].
An important supplement to any MER policy is the antibiotic susceptibility bonus (ASB)[47]. The ASB rewards companies that develop antibiotics that are effective over long periods of time. As an antibiotic remains effective against target bacteria, companies receive monetary awards. This policy helps better align all stakeholders’ (companies, patients, hospitals, insurance networks) interests towards generating and maintaining effective antibiotics. Companies will no longer have an incentive to oversell antibiotics, as they will receive more money the longer their drug is effective. This supplemental policy could safeguard MERs against abuse, and incentivize the development of antibiotics that act in society’s best interest: to develop effective treatments for long periods of time.
Another potential pull policy is the long-term supply continuity model (LSCM)[42], which addresses how companies respond once market exclusivity for a drug ends due to patent expiration. Suppliers may respond to loss of market exclusivity by either manufacturing fewer units in the case of a modest market or by increasing sales through marketing and promotion. Both actions are detrimental to public health in the case of an antibiotic, either promoting antibiotic overuse or making it harder for people who need the antibiotic to get it. The LSCM addresses this by having a country or group of countries make an agreement with manufacturers to produce a predetermined amount of the respective antibiotic for a specified price. This model to generate a predictable supply of an antibiotic acts as a pull mechanism by making the market for novel, essential antibiotics more sustainable for manufacturers.
Pull policies also have some downsides. For one, pull policies only reward successful antibiotic discovery campaigns; the inherent risk in developing these drugs may still dissuade companies. Also, while push policies have been validated with real world results, pull policies have not been evaluated as extensively. To encourage companies to work in this area, push policies, as well as pull policies, are needed to lower the risk of failed discovery programs. To develop the new drugs that society needs, companies need funding to start research and development and economic incentives to take the drugs to market.
Conclusion
Proliferation of antibiotic resistance in bacteria is a major public health problem that is only accelerating. This crisis is caused by overuse of existing antibiotic drugs and lagging development of new ones. To address the former, many US and international institutions are working to improve current diagnostic practices and adopt standards for responsible antibiotic use. Increasing funding for rapid diagnostics R&D, initiating educational programs, and mandating the adoption of comprehensive hospital antibiotic stewardship programs are possible ways to reduce antibiotic overuse. To encourage the development of novel antibiotic drugs, many organizations have also subsidized research and development in this area. Additional incentives are being pursued to improve the commercial viability of antibiotic development and protect drug developers from the risks of the antibiotic market. Antibiotic resistance is a major global health crisis, but with efforts to improve responsible use and end the almost 40-year drought of novel antibiotic drug discovery[48], we can take steps to prevent the next public health emergency.
## Acknowledgements
We thank Erika Madrian for her input in shaping the manuscript.
## Citation
Yee, E. H., Cheng, S. S., Knappe, G. A. & Moomau, C. A. Antibiotic resistance: How to prevent the next public health emergency. MIT Science Policy Review 1, 10-17 (2020).
## References
[1] Schanzenbach, D. W., Nunn, R. & Bauer, L. The changing landscape of America life expectancy. Tech. Rep., The Brookings Institution (2016).
[2] Harbarth, S., Theuretzbacher, U. & Hackett, J. Antibiotic research and development: business as usual? J. Antimicrob. Chemother. 70, 1604–1607 (2015). https://doi.org/10.1093/jac/dkv020.
[3] Silver, L. L. Challenges of antibacterial discovery. Clin. Microbiol. Rev. 24, 71–109 (2011). https://doi.org/10.1128/CMR.00030-10.
[4] Arnaud, C. H. Penicillin. Chem. Eng. News 83 (2008).
[5] CDC. Antibiotic resistance threats in the United States. Tech. Rep., Atlanta, GA: U.S. Department of Health and Human Services, CDC (2019). https://doi.org/10.15620/cdc:82532.
[6] Global action plan on antimicrobial resistance. Tech. Rep., The World Health Organization (2016).
[7] Kohanski, M. A., Dwyer, D. J. & Collins, J. J. How antibiotics kill bacteria: From targets to networks. Nat. Rev. Microbiol. 8, 423–435 (2010). https://doi.org/10.1038/nrmicro2333.
[8] Schmehl, M. No time to wait: Securing the future from drug-resistant infections. DukeSciPol. (2019).
[9] Spellberg, B., Sharma, P. & Rex, J. H. The critical impact of time discounting on economic incentives to overcome the antibiotic market failure. Nat. Rev. Drug Discov. 11, 168 (2012). https://doi.org/10.1038/nrd3560-c1.
[10] Morel, C. M. & Edwards, S. E. Encouraging sustainable use of antibiotics: A commentary on the DRIVE-AB recommended innovation incentives. J. Law, Med. Ethics 46, 75–80 (2018). https://doi.org/10.1177/1073110518782918.
[11] Sprenger, M. Superbugs: The world is taking action, but low-income countries must not be left behind. Online: https://www.who.int/news-room/commentaries/detail/superbugs-the-world-is-taking-action-but-lowincome-countries-must-not-be-left-behind (2017). Accessed: May 2020.
[12] Fleming-Dutra, K. E. et al. Prevalence of inappropriate antibiotic prescriptions among US ambulatory care visits, 2010-2011. J. Am. Med. Assoc. 315, 1864–1873 (2016). https://doi.org/10.1001/jama.2016.4151.
[13] King, L. M., Sanchez, G. V., Bartoces, M., Hicks, L. A. & Fleming-Dutra, K. E. Antibiotic therapy duration in US adults with sinusitis. JAMA Intern. Med. 178, 992–994 (2018). https://doi.org/10.1001/jamainternmed.2018.0407.
[14] Yi, S. H. et al. Duration of antibiotic use among adults with uncomplicated community-acquired pneumonia requiring hospitalization in the United States. Clin. Infect. Dis. 66, 1333–1341 (2018). https://doi.org/10.1093/cid/cix986.
[15] O’Neill, J. et al. Tackling drug-resistant infections globally: Final report and recommendations. Tech. Rep., Review on Antimicrobial Resistance (2016).
[16] CDC. What CDC is doing: Antibiotic resistance (AR) solutions initiative. Online: https://www.cdc.gov/drugresistance/solutions-initiative/index.html (2020). Accessed: May 2020.
[17] NIH-DPCSI. Antimicrobial resistance diagnostic challenge. Online: https://dpcpsi.nih.gov/AMRChallenge (2019). Accessed: May 2020.
[18] The fight against superbugs: Annual report 2018-2019. Tech. Rep., CARB-X (2019).
[19] Routh, J. NIH funds nine antimicrobial resistance diagnostics projects. Online: https://www.nih.gov/news-events/news-releases/nih-funds-nine-antimicrobialresistance-diagnostics-projects (2015). Accessed: May 2020.
[20] L’´toile, M. Biomérieux launches the Biofire® Filmarray® pneumonia panels with FDA clearance and CE marking. Online: https://www.businesswire.com/news/home/
20181112005811/en/bioM%C3%A9rieux-launchesBIOFIRE%C2%AE-FILMARRAY%C2%AE-Pneumonia-PanelsFDA (2018). Accessed: May 2020.
[21] Biofire® Filmarray® panels – comprehensive panels and better diagnostics. Online: https://www.biofiredx.com/ products/the-filmarray-panels/ (2020). Accessed: May 2020.
[22] A new tool to diagnose tuberculosis: The Xpert MTB/RIF assay. Tech. Rep., CDC NCHHSTP DTE (2013).
[23] Wejse, C. Xpert MTB/RIF is cost-effective, but less so than expected. Lancet Glob. Health 7, E692–E693 (2019). https://doi.org/10.1016/S2214-109X(19)30159-7.
[24] FIND. Negotiated prices. Online: https://www.finddx.org/pricing/genexpert/ (2019). Accessed: May 2020.
[25] Shuman, A. J. AI in pediatrics: Past, present, and future. Contemp. Pediatr. 36, 1–4 (2017).
[26] Paxton, A. New contender speeds ID with susceptibility testing. Online: https://www.captodayonline.com/newcontender-speeds-id-susceptibility-testing (2018). Accessed: May 2020.
[27] Schrag, S. J. et al. Effect of short-course, high-dose amoxicillin therapy on resistant pneumococcal carriage: A randomized trial. J. Am. Med. Assoc. 286, 49–56 (2001). https://doi.org/10.1001/jama.286.1.49.
[28] el Moussaoui, R. et al. Effectiveness of discontinuing antibiotic treatment after three days versus eight days in mild to moderate-severe community acquired pneumonia: randomised, double blind study. BMJ 332, 1355 (2006). https://doi.org/10.1136/bmj.332.7554.1355.
[29] Chastre, J. et al. Comparison of 8 vs 15 days of antibiotic therapy for ventilator- associated pneumonia in adults: A randomized trial. J. Am. Med. Assoc. 290, 2588–2598 (2003). https://doi.org/10.1001/jama.290.19.2588.
[30] Sandberg, T. et al. Ciprofloxacin for 7 days versus 14 days in women with acute pyelonephritis: a randomised, open-label and double-blind, placebo-controlled, non-inferiority trial. Lancet 380, 484–490 (2012). https://doi.org/10.1016/S0140-6736(12)60608-4.
[31] Chow, A. W. et al. Executive summary: IDSA clinical practice guideline for acute bacterial rhinosinusitis in children and adults. Clin. Infect. Dis. 54, 1041–1045 (2012). https://doi.org/10.1093/cid/cir1043.
[32] Al-Homaidan, H. T. & Barrimah, I. E. Physicians’ knowledge, expectations, and practice regarding antibiotic use in primary health care. Int. J. Health Sci. (Qassim) 12, 18–24 (2018).
[33] Stivers, T., Mangione-Smith, R., Elliot, M. N., McDonald, L. & Heritage, J. Why do physicians think parents expect antibiotics? What parents report vs what physicians believe. J. Fam. Pract. 52, 140–148 (2003).
[34] McKay, R., Mah, A., Law, M. R., McGrail, K. & Patrick, D. M. Systematic review of factors associated with antibiotic prescribing for respiratory tract infections. Antimicrob. Agents Chemother. 60, 4106–4118 (2016). https://doi.org/10.1128/AAC.00209-16.
[35] Sanbuncu, E. et al. Significant reduction of antibiotic use in the community after a nationwide campaign in France, 2002-2007. PLoS Med. 6, e1000084 (2009). https://doi.org/10.1371/journal.pmed.1000084.
[36] New antimicrobial stewardship standard. Tech. Rep., Joint Commission Perspectives (2016).
[37] Antibiotic prescribing and use in the U.S. Online: https://www.cdc.gov/antibiotic-use/stewardshipreport/index.html (2019). Accessed: May 2020.
[38] Sprenger, M. How to stop antibiotic resistance? Here’s a who prescription. Online: https://www.captodayonline.com/new-contender-speeds-id-susceptibility-testing (2015). Accessed: January 2020.
[39] Shor, E. & Nolen, R. FDA adds novel antimicrobials to arsenal. Online: https://www.pharmacytimes.com/publications/health-system-edition/2019/
[40] Stokes, J. M. et al. A deep learning approach to antibiotic discovery. Cell 180, 688–702 (2020). https://doi.org/10.1016/j.cell.2020.01.021.
[41] Laxminarayan, R. & Power, J. H. Antibacterial R&D incentives. Nat. Rev. Drug Discov. 10, 727–728 (2011). https://doi.org/10.1038/nrd3560.
[42] Årdal, C. et al. Revitalizing the antibiotic pipeline: Stimulating innovation while driving sustainable use and global access. Tech. Rep., Drive-AB (2018).
[43] McCoy, M. Antibiotic developer Achaogen files for bankruptcy. Chem. Eng. News 97 (2019).
[44] Batista, P. H. D., Byrski, D., Lamping, M. & Romandini, R. IP-based incentives against antimicrobial crisis: A European perspective. IIC Int. Rev. Intellect Prop. Compet. Law 50, 30–76 (2019). https://doi.org/10.1007/s40319-018-00782-w.
[45] Boucher, H. Testimony of the Infectious Diseases Society of America on U.S. biodefense, preparedness, and implications of antimicrobial resistance for national security. Tech. Rep., Infectious Disease Society of America (2019).
[46] Stern, S. et al. Breaking through the wall: A call for concerted action on antibiotics research and development. Tech. Rep., The Boston Consulting Group (2019).
[47] Morel, C. M. et al. Industry incentives and antibiotic resistance: An introduction to the antibiotic susceptibility bonus. J. Antibiot. (2020). https://doi.org/10.1038/s41429-020-0300-y.
[48] Jinks, T. Why is it so difficult to discover new antibiotics? Online: https://www.bbc.com/news/health-41693229 (2017). Accessed: January 2020.
##### Emma H. Yee
Department of Chemical Engineering, Massachusetts Institute of Technology, Cambridge, MA
##### Steven S. Cheng
Department of Chemistry and Chemical Biology, Harvard University, Cambridge, MA
##### Grant A. Knappe
Department of Chemical Engineering, Massachusetts Institute of Technology, Cambridge, MA
Department of Biological Engineering, Massachusetts Institute of Technology, Cambridge, MA
##### Christine A. Moomau
Department of Biology, Massachusetts Institute of Technology, Cambridge, MA | 2021-05-19 00:33:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21856427192687988, "perplexity": 10438.292051126156}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989874.84/warc/CC-MAIN-20210518222121-20210519012121-00413.warc.gz"} |
These are homework exercises to accompany the Textmap created for "General Chemistry: Principles and Modern Applications" by Petrucci et al. Complementary General Chemistry question banks can be found for other Textmaps and can be accessed here. In addition to these publicly available questions, access to a private problem bank for use in exams and homework is available to faculty only on an individual basis; please contact Delmar Larsen for an account with access permission.
## Q13.1a
Which of the following do you expect to be the least water soluble, and why? $$C_{10}H_{8(s)}$$, $$NH_2OH_{(s)}$$, $$C_6H_{6(l)}$$, $$CaCO_{3(s)}$$.

## S13.1a

$$CaCO_3$$ is the least soluble: although it is ionic, the high charges on its ions give it a very large lattice energy, which hydration by water cannot overcome.
## Q13.1b
Which compound would be expected to readily dissolve in gasoline, and why?
CH3CH2OH(l), NH4+(aq), CH3(CH2)6COOH(s), BF3(g)

## S13.1b

We would expect caprylic acid, $$CH_3(CH_2)_6COOH$$, to dissolve most readily in gasoline: its long hydrocarbon chain makes it the most nonpolar of the choices, so it mixes best with nonpolar hydrocarbons.
## Q13.2a
Which of the following is not moderately soluble both in water and in benzene ($$C_6H_{6(l)}$$), and why? (a) 1-butanol, $$CH_3(CH_2)_2CH_2OH$$; (b) naphthalene, $$C_{10}H_8$$; (c) hexane,$$C_6H_{14}$$ (d) $$NaCl_{(s)}$$
## S13.2a

1. 1-butanol is soluble in water but not in benzene
2. naphthalene is soluble in benzene but not in water
3. hexane is soluble in benzene but not in water
4. NaCl is soluble in water because it is an ionic solid whose hydration energy exceeds the energy needed to separate the ions from the lattice, but it is insoluble in benzene.
## Q13.2b
What are some examples of heterogeneous and homogeneous mixtures?
## S13.2b
A homogeneous mixture has its components uniformly distributed throughout the solution.

• Examples: salt in water, sugar in water; all true solutions are homogeneous mixtures

A heterogeneous mixture has components that remain in separate phases rather than being uniformly distributed.

• Examples: sand in water, oil in water; all suspensions and colloidal dispersions are heterogeneous mixtures
## Q13.3
Substances that dissolve in water generally do not dissolve in benzene. Some substances are moderately soluble in both solvents, however. Which of these are such substances? (a) para-Dichlorobenzene (b) Salicyl alcohol (c) Diphenyl (d) Hydroxyacetic acid
## S13.3
1. no
2. yes, because it has both an OH group (polar) and a benzene ring (nonpolar)
3. no
4. yes
## Q13.31
A solution of 430.0g C7H16, 600.0g C5H12 and 150.0g C9H20 is prepared. What is the a) Mass percent, and b) mole percent of each component in the solution?
## S13.31
a)
C7H16: (430.0gC7H16)/(600.0g+150.0g+430.0g) * 100%= 36.44%C7H16
C5H12: (600.0g C5H12)/(600.0g+150.0g+430.0g) * 100%= 50.85%C5H12
C9H20: (150.0g C9H20)/(600.0g+150.0g+430.0g) * 100%= 12.71%C9H20
b) 430.0g C7H16 * (1 moleC7H16)/(100.2g C7H16)= 4.29 mol C7H16
600.0g C5H12 * (1 moleC5H12)/(72.15gC5H12)= 8.32 mol C5H12
150.0g C9H20* (1 moleC9H20)/(128.26gC9H20)= 1.17 mol C9H20
(4.29 mol C7H16)/(4.29 mol C7H16+ 8.32 mol C5H12+1.17 mol C9H20)*100%=31.13%C7H16
(8.32 mol C5H12)/(4.29 mol C7H16+ 8.32 mol C5H12+1.17 mol C9H20)*100%=60.38%C5H12
(1.17 mol C9H20)/(4.29 mol C7H16+ 8.32 mol C5H12+1.17 mol C9H20)*100%=8.49%C9H20
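Bookkeeping like this is easy to script. Below is a small Python sketch (the variable names are ours, not from the text) that reproduces both the mass percents and the mole percents:

```python
# Mass percent and mole percent for the three-hydrocarbon mixture in S13.31.
masses = {"C7H16": 430.0, "C5H12": 600.0, "C9H20": 150.0}   # g
molar_mass = {"C7H16": 100.20, "C5H12": 72.15, "C9H20": 128.26}  # g/mol

total_mass = sum(masses.values())
moles = {f: m / molar_mass[f] for f, m in masses.items()}
total_moles = sum(moles.values())

for f in masses:
    print(f, f"{100 * masses[f] / total_mass:.2f} mass %",
          f"{100 * moles[f] / total_moles:.2f} mol %")
```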
## Q13.33a
Calculate the mole fraction of solute for these substances.
1. 1500 g H2O and 250 g NaCl
2. 230 g C2H5OH and 720 g H2O
## S13.33a
a) M(NaCl) = 23 g/mol + 35.5 g/mol = 58.5 g/mol

250 g / 58.5 g/mol = 4.27 mol NaCl

1500 g / 18 g/mol = 83.3 mol H2O

X(NaCl) = 4.27/(4.27 + 83.3) = 0.05

b) 230 g / 46 g/mol = 5 mol C2H5OH

720 g / 18 g/mol = 40 mol H2O

mole fraction C2H5OH = 5/(40 + 5) = 1/9 = 0.11
## Q13.33b
Calculate the mole fraction of the solute in the following aqueous solutions: (a) 0.221M C6H1206 (d=3.20g/mL); (b) 5.1% ethanol, by volume (d=2.001g/mL); pure CH3CH2OH, d=0.989g/mL).
## S13.33b

(a) Moles of C6H12O6 = 1.00 L × (0.221 mol C6H12O6/1.00 L) = 0.221 mol C6H12O6

• Mass of solution = 1000 mL soln × (3.20 g soln/1.0 mL soln) = 3200 g soln
• Mass of C6H12O6 = 0.221 mol C6H12O6 × (180 g C6H12O6/1 mol C6H12O6) = 39.78 g
• Mass of H2O = 3200 − 39.78 = 3160.22 g $$H_2O$$
• Moles of H2O = 3160.22 g H2O × (1 mol H2O/18.02 g H2O) = 175.373 mol $$H_2O$$

X(C6H12O6) = 0.221 mol C6H12O6/(0.221 mol + 175.373 mol) = 0.00126

(b) Mass of ethanol = 5.1 mL ethanol × (0.989 g ethanol/1.0 mL ethanol) = 5.044 g ethanol

• Mass of soln = 100.0 mL soln × (2.001 g soln/1.0 mL soln) = 200.1 g
• Mass of H2O = 200.1 − 5.044 = 195.056 g $$H_2O$$
• Mol C2H5OH = 5.044 g × (1.0 mol C2H5OH/46.07 g C2H5OH) = 0.109 mol
• Mol H2O = 195.056 g × (1 mol/18.02 g) = 10.824 mol

X(C2H5OH) = 0.109 mol/(0.109 mol + 10.824 mol) = 0.010
## Q13.35
What volume of glycerol, CH3CH(OH)CH2OH (d = 3.02 g/mL), must be added per kilogram of water to produce a solution with 5.50 mol % glycerol?

## S13.35

n(water) = 1000 g H2O × (1 mol H2O/18.02 g H2O) = 55.49 mol $$H_2O$$

X(gly) = 0.0550 = n(gly)/(n(gly) + 55.49), so n(gly) = 0.0550 n(gly) + 3.05

n(gly) = 3.05/(1.0000 − 0.0550) = 3.23 mol glycerol

Volume glycerol = 3.23 mol C3H8O3 × (92.09 g C3H8O3/1 mol C3H8O3) × (1 mL/3.02 g) = 98.5 mL glycerol
## Q13.39a
Refer to Figure 13-8 and determine the molality of NH4Cl in a saturated aqueous solution at 50ºC.
## S13.39a

According to Figure 13-8, at 50 °C the solubility is about 51 g of NH4Cl per 100 g H2O.

molality = (51 g NH4Cl × (1 mol/53.49 g NH4Cl))/(100 g H2O × (1 kg/1000 g)) = 9.5 m
## Q13.39b
Refer to Figure 13-8 and determine the molality of $$NH_4Cl$$ in a saturated aqueous solution at 60 C.
## S13.39b

At 60 °C the solubility of $$NH_4Cl$$ is 56.3 g per 100 g of $$H_2O$$.

$Molality=\dfrac{56.3\;\text{g} \times \dfrac{1\;\text{mol NH}_4\text{Cl}}{53.49\;\text{g NH}_4\text{Cl}}}{100\;\text{g H}_2\text{O} \times \dfrac{1\;\text{kg}}{1000\;\text{g}}}=10.53\;m$
## Q13.41
A solution of 15.0g $$KClO_4$$ in 450 g of water is brought to a temperature of 40C
1. Refer to Figure 13-8 and determine whether the solution is unsaturated or supersaturated at 40C
2. Approximately what mass of $$KClO_4$$, in grams, must be added to saturate the solution (if originally unsaturated), or what mass of $$KClO_4$$ can be crystallized (if originally supersaturated)?
## S13.41
1. Mass solute per 100 g H2O = 100 g H2O × (15.0 g KClO4/450.0 g water) = 3.33 g KClO4. At 40 °C a saturated KClO4 solution contains about 4.6 g KClO4 per 100 g water. Thus the solution is unsaturated.
2. Mass to be added = (450 g H2O × (4.6 g KClO4/100 g H2O)) − 15.0 g KClO4 = 20.7 − 15.0 = 5.7 g KClO4
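A sketch of the same saturation check in Python (the 4.6 g/100 g figure is read off Figure 13-8, as above):

```python
# Normalize the dissolved mass to "g per 100 g water" and compare it
# against the solubility from Figure 13-8 at 40 C.
mass_solute = 15.0   # g KClO4
mass_water = 450.0   # g H2O
solubility = 4.6     # g KClO4 per 100 g H2O at 40 C (from the figure)

per_100g = mass_solute / mass_water * 100
print("unsaturated" if per_100g < solubility else "supersaturated")
print(f"mass to saturate: {solubility * mass_water / 100 - mass_solute:.1f} g")
```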
## Q13.43a
If the Henry's law constant for nitrogen gas dissolved in water at 25 °C is 6.40 × 10^-4 (mol/L)/atm, how many grams of nitrogen gas dissolve in 3 L of water under a nitrogen pressure of 2 atm at 25 °C?
## S13.43a

C = kP

C = (6.40 × 10^-4 M/atm)(2 atm) = 0.00128 M

(0.00128 mol/L)(3 L) = 0.00384 mol

0.00384 mol × (28.02 g/mol) = 0.108 g N2
## Q13.43b
Under a pressure of 1.00 atm, 43.25 mL of $$O_{2(g)}$$ dissolves in 2.3 L $$H_2O$$ at 25 C. What will be the molarity of $$O_2$$ in the saturated solution 25C when the $$O_2$$ pressure is 5.49 atm?
## S13.43b

$Molarity=\dfrac{0.04325\;L\;O_2 \times (1\;mol\;O_2/24.465\;L\;O_2)}{2.3\;L}=7.69 \times 10^{-4}\;M$

Henry's law constant for $$O_2$$: $$k = 7.69 \times 10^{-4}\;M/1.00\;atm$$, so at 5.49 atm the saturated solution has $$C = kP = (7.69 \times 10^{-4}\;M/atm)(5.49\;atm) = 4.22 \times 10^{-3}\;M$$.
## Q13.45

Assume that the solubility of natural gas at 20 °C and 1 atm gas pressure is 0.032 g/kg water. If a sample of natural gas under a pressure of 35 atm is kept in contact with $$1.04 \times 10^3$$ kg of water, what mass of natural gas will dissolve?

## S13.45

$Mass\;of\;CH_4=1.04 \times 10^3\;kg \times \dfrac{0.032\;g}{1\;kg\;H_2O\;atm} \times 35\;atm=1164\;g\;CH_4$
## Q13.47
The aqueous solubility at 20C of Ar at 1 atm is equivalent to 53.2 mL $$Ar_{(g)}$$, measured at STP, per liter of water. What is the molarity of Ar in water that is saturated with air at 1 atm and 20C? Air contains 0.897% Ar by volume. Assume that the volume of water does not change when it becomes saturated with air.
## S13.47

$K_{Ar}=\dfrac{C}{P_{Ar}}=\dfrac{(53.2\;mL\;Ar/1\;L\;soln) \times (1\;mol\;Ar/22{,}414\;mL\;at\;STP)}{1\;atm}=0.0024\;M/atm$

$M=K_{Ar}\,P_{Ar}=(0.0024\;M/atm)(0.00897\;atm)=2.1 \times 10^{-5}\;M$
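The Henry's-law pattern used in problems 43 to 47 (scale a known solubility linearly with the partial pressure) can be checked with a few lines of Python; the numbers below are the Q13.47 data, and the helper names are ours:

```python
# Henry's law, C = k * P: convert the stated gas solubility at 1 atm into a
# constant, then scale to the partial pressure of Ar in air.
ml_gas_per_L = 53.2          # mL Ar(g) at STP per liter of water, at 1 atm Ar
molar_volume_stp = 22414.0   # mL/mol
k = (ml_gas_per_L / molar_volume_stp) / 1.0   # mol/(L*atm)

p_ar_in_air = 0.00897        # atm (0.897 % of 1 atm)
print(f"[Ar] = {k * p_ar_in_air:.2e} M")      # ~2.1e-5 M
```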
## Q13.49
Henry’s law can be stated this way: The mass of a gas dissolved by a given quantity of solvent at a fixed temperature is directly proportional to the pressure of the gas. Show how this statement is related to equation (13.2)
## S13.49

Because of the low density of molecules in the gaseous state, the solution volume remains essentially constant as a gas dissolves in a liquid. Any change in concentration therefore comes only from a change in the number of dissolved gas molecules, and that number is directly proportional to the mass of dissolved gas. So the statement "mass dissolved is proportional to pressure" is equivalent to equation (13.2), $$C = k \cdot P_{gas}$$.
## Q13.51a
What is the vapor pressure (in mmHg) of a solution of 4.40 g of Br2 in 101.0 g of CCl4 at 300 K? The vapor pressure of pure bromine at 300 K is 30.5 kPa and the vapor pressure of CCl4 is 16.5 kPa.
## S13.51a
1) Calculate moles, then mole fraction of each substance:
Br2 ⇒ 4.40 g / 159.808 g/mol = 0.027533 mol
CCl4 ⇒ 101.0 g / 153.823 g/mol = 0.6566 mol
χBr2 ⇒ 0.027533 mol / 0.684133 mol = 0.040245
χCCl4 ⇒ 0.6566 mol / 0.684133 mol = 0.959755
2) Calculate total pressure:
Ptotal = P°Br2χBr2 + P°CCl4χCCl4
x = (30.5 kPa) (0.040245) + (16.5 kPa) (0.959755)
x = 1.2275 + 15.8360 = 17.0635 kPa
3) Convert to mmHg:
17.0635 kPa x (760.0 mmHg / 101.325 kPa) = 128 mmHg
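A quick Python check of the Raoult's-law arithmetic above (helper names are ours):

```python
# Raoult's law for a two-component ideal solution, with the S13.51a data:
# mole fractions from masses, then P_total = x1*P1 + x2*P2.
m_br2, mm_br2, p0_br2 = 4.40, 159.808, 30.5       # g, g/mol, kPa
m_ccl4, mm_ccl4, p0_ccl4 = 101.0, 153.823, 16.5

n_br2, n_ccl4 = m_br2 / mm_br2, m_ccl4 / mm_ccl4
x_br2 = n_br2 / (n_br2 + n_ccl4)
p_total_kpa = x_br2 * p0_br2 + (1 - x_br2) * p0_ccl4
print(f"{p_total_kpa * 760 / 101.325:.0f} mmHg")   # ~128 mmHg
```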
## Q13.51b
What are the partial and total vapor pressures of a solution obtained by mixing 43.4 g benzene, $$C_6H_6$$, and 65.3 g toluene, $$C_6H_5CH_3$$, at 25C? The vapor pressure of $$C_6H_6$$ at 25C is 95.1 mmHg; the vapor pressure of $$C_6H_5CH_3$$ is 28.4 mmHg.
## Q13.51b
$N_1=(43.4\;\cancel{g}) \left(\dfrac{1\; mol}{78.11\; \cancel{g}}\right)=0.556\;mol\; C_6H_6$

$N_2=(65.3\;\cancel{g}) \left(\dfrac{1\; mol}{92.14\; \cancel{g}}\right)=0.709\;mol\; C_7H_8$
$X_1=\dfrac{0.556}{0.556+0.709}=0.440$
$X_2=\dfrac{0.709}{0.556 + 0.709}=0.560$
$P_1=(0.44)(95.1)=41.8\; mmHg$
$P_2=(0.56)(28.4)=15.9\; mmHg$
$P_{Total}=P_1+ P_2 = 41.8\;mmHg + 15.9\;mmHg =57.7\;mmHg$
## Q13.53
Calculate the vapor pressure at 25C of a solution containing 178g of the nonvolatile solute, glucose, $$C_6H_{12}O_6$$, in 967g $$H_2O$$. The vapor pressure of water at 25C is 23.8 mmHg.
## S13.53

n(glucose) = 178 g × (1 mol/180.2 g) = 0.988 mol

n(water) = 967 g × (1 mol/18.02 g) = 53.7 mol

X(glucose) = n(glucose)/(n(glucose) + n(water)) = 0.988/(0.988 + 53.7) = 0.0181

$\dfrac{P_A^0-P_A}{P_A^0}=X_{solute}$

(23.8 mmHg − P_A)/23.8 mmHg = 0.0181

P_A = 23.4 mmHg
## Q13.57
A benzene-toluene solution with Xbenz=0.308 has a normal boiling point of 98.6C. The vapor pressure of pure toluene at 98.6C is 533mmHg. What must be the vapor pressure of pure benzene at 98.6C?
## S13.57
Mole fraction of toluene: X(t) = 1 − X(b) = 1 − 0.308 = 0.692

P(toluene) = X(toluene) × P°(toluene) = 0.692 × 533 mmHg = 369 mmHg

Partial pressure of benzene in solution = 760 mmHg − 369 mmHg = 391 mmHg

Partial pressure of benzene = mole fraction × vapor pressure of pure benzene:

391 mmHg = 0.308 × P°(b), so P°(b) = 391/0.308 = 1270 mmHg
## Q13.59a
A 0.63 g sample of polyvinyl chloride is dissolved is 324mL of a suitable solvent at 21C. The solution has an osmotic pressure of 1.67 mmHg. What is the molar mass of PVC?
## S13.59a
n/V = π/RT = (1.67 mmHg × (1 atm/760 mmHg))/(0.08206 L atm mol^-1 K^-1 × 294.2 K) = 9.10 × 10^-5 M

Solute amount = 0.324 L × (9.10 × 10^-5 mol/1 L) = 2.95 × 10^-5 mol

M = 0.63 g/(2.95 × 10^-5 mol) = 2.1 × 10^4 g/mol
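The osmotic-pressure route to molar mass (used here and again in Q13.65) follows one template; a Python sketch with the S13.59a numbers, variable names ours:

```python
# Molar mass from osmotic pressure: n/V = pi/(R*T), then M = mass/n.
pi_atm = 1.67 / 760          # osmotic pressure, mmHg -> atm
R, T = 0.08206, 294.2        # L*atm/(mol*K), K (21 C)
V, mass = 0.324, 0.63        # L of solution, g of polymer

n = pi_atm / (R * T) * V
print(f"M = {mass / n:.2e} g/mol")   # ~2.1e4 g/mol
```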
## Q13.59b
What is the osmotic pressure of a 125mL solution containing 7.6g glucose at 37˚C?
## S13.59b
$π=iMRT$
i = 1 because glucose does not dissociate into ions.

M = (7.6 g C6H12O6)(1 mol/180.16 g) = 0.042 mol; 0.042 mol/0.125 L = 0.336 mol/L

R = 0.08206 L·atm·mol^-1·K^-1

T = 37 °C + 273.15 = 310.15 K

π = (0.336 mol/L)(0.08206 L·atm·mol^-1·K^-1)(310.15 K) = 8.55 atm
## Q13.61
The stems of cut flowers wilt when they are placed in concentrated NaCl(aq). A fresh cucumber shrivels in a similar solution. Explain the basis of these phenomena.
## S13.61
Both the flower stems and the cucumber contain aqueous solutions that are less concentrated than the external salt solution. Water in the plant material moves across the semipermeable cell membranes to dilute the salt solution, leaving the flowers wilted and the cucumber shriveled.
## Q13.63
In what volume of water must be 3mol of a nonelectrolyte be dissolved if the solution is to have an osmotic pressure of 4atm at 301K? Which of the gas laws does this result resemble?
## S13.63

n/V = π/RT = 4 atm/(0.08206 L atm mol^-1 K^-1 × 301 K) = 0.162 M

Volume = 3 mol × (1 L/0.162 mol solute) = 18.5 L of solution ≈ 18.5 L of solvent

The osmotic pressure equation, πV = nRT, has the same form as the ideal gas equation, PV = nRT.
## Q13.65
At 25C a 0.71g sample of polyisobutylene in 200.0mL of benzene solution has an osmotic pressure that supports a 9.1 mm column of solution (d=0.88g/mL.) What is the molar mass of the polyisobutylene? (Hg, d=13.6g/mL)
## S13.65

π = 9.1 mm soln × (0.88 g/mL ÷ 13.6 g/mL) = 0.59 mmHg; 0.59 mmHg × (1 atm/760 mmHg) = 7.7 × 10^-4 atm

n/V = π/RT = 7.7 × 10^-4 atm/(0.08206 L atm mol^-1 K^-1 × 298 K) = 3.2 × 10^-5 M

Amount of solute = 200.0 mL × (1 L/1000 mL) × (3.2 × 10^-5 M) = 6.3 × 10^-6 mol solute

Molar mass = 0.71 g/(6.3 × 10^-6 mol) = 1.1 × 10^5 g/mol
## Q13.71
Determine the new freezing point of a solution made from 3kg of water and 2.5mol of CaCl2. Freezing point constant for water is -1.86ºC/m
## S13.71

∆T = i m K_f, with i = 3 because CaCl2 dissociates completely:

CaCl2 → Ca^2+ + 2 Cl^-

m = 2.5 mol/3 kg = 0.833 m

∆T = (3)(0.833 m)(−1.86 °C/m) = −4.65 °C

New freezing point: 0 °C − 4.65 °C = −4.65 °C
## Q13.75
Thiophene (fp = −38.3 °C; bp = 84.4 °C) is a sulfur-containing hydrocarbon sometimes used as a solvent in place of benzene. Combustion of a 2.348 g sample of thiophene produces 4.913 g CO2, 1.005 g $$H_2O$$, and 1.788 g SO2. When 0.867 g of thiophene is dissolved in 44.56 g of benzene, the freezing point is lowered by 1.183 °C. What is the molecular formula of thiophene?
## S13.75

m = ∆T_f/(−K_f) = −1.183 °C/(−5.12 °C/m) = 0.231 m

Amount of solute = 0.04456 kg benzene × (0.231 mol solute/1 kg benzene) = 0.0103 mol solute

Molar mass = 0.867 g thiophene/0.0103 mol thiophene = 84.2 g/mol

C: 4.913 g × (1 mol CO2/44.010 g) × (1 mol C/1 mol CO2) = 0.1116 mol C; 0.1116/0.02791 = 4.00 mol C

H: 1.005 g × (1 mol H2O/18.015 g H2O) × (2 mol H/1 mol H2O) = 0.1116 mol H; 0.1116/0.02791 = 4.00 mol H

S: 1.788 g × (1 mol SO2/64.065 g SO2) × (1 mol S/1 mol SO2) = 0.02791 mol S; 0.02791/0.02791 = 1.00 mol S

The empirical formula is C4H4S (84.14 g/mol), which matches the measured molar mass, so the molecular formula of thiophene is C4H4S.
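The combustion-analysis bookkeeping can be automated; this Python sketch (names are ours) repeats the element counts and the divide-by-smallest step:

```python
# Combustion analysis -> empirical formula, using the S13.75 product masses.
mm = {"CO2": 44.010, "H2O": 18.015, "SO2": 64.065}   # g/mol
n_C = 4.913 / mm["CO2"]        # 1 C per CO2
n_H = 2 * 1.005 / mm["H2O"]    # 2 H per H2O
n_S = 1.788 / mm["SO2"]        # 1 S per SO2

smallest = min(n_C, n_H, n_S)
print([round(n / smallest, 2) for n in (n_C, n_H, n_S)])  # ~[4.0, 4.0, 1.0] -> C4H4S
```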
## Q13.77
Cooks often add some salt to water before boiling it. Some people say this helps the cooking process by raising the boiling point of the water. Others say not enough salt is usually added to make any noticeable difference. Approximately how many grams of NaCl must be added to a liter of water at 1 atm pressure to raise the boiling point by 3.4C? Is this a typical amount of salt that you might add to cooking water?
## S13.77
m = ∆T/(i K_b) = 3.4 °C/(2 × 0.512 °C/m) = 3.3 m

Solute mass = 1.00 L H2O × (1 kg H2O/1 L H2O) × (3.3 mol NaCl/1 kg H2O) × (58.4 g NaCl/1 mol NaCl) = 190 g NaCl

This is at least ten times the amount of salt anyone would normally add to a liter of cooking water, so the small amount actually added makes no noticeable difference to the boiling point.
## Q13.81
Predict the approximate freezing points of 0.10 m solutions of the following solutes dissolved in water:
1. CO(NH2)2;
2. NH4NO3;
3. HCl;
4. CaCl2;
5. MgSO4;
6. C2H5OH;
7. HC2H3O2
## S13.81
ΔT_f = −K_f × m × i, with m = 0.10 m and K_f = 1.86 °C/m

a) ΔT_f = −(1.86 °C/m)(0.10 m)(1) = −0.186 °C; freezing point = −0.186 °C

b) ΔT_f = −(1.86 °C/m)(0.10 m)(2) = −0.372 °C; freezing point = −0.372 °C

c) ΔT_f = −(1.86 °C/m)(0.10 m)(2) = −0.372 °C; freezing point = −0.372 °C

d) ΔT_f = −(1.86 °C/m)(0.10 m)(3) = −0.558 °C; freezing point = −0.558 °C

e) ΔT_f = −(1.86 °C/m)(0.10 m)(2) = −0.372 °C; freezing point = −0.372 °C

f) ΔT_f = −(1.86 °C/m)(0.10 m)(1) = −0.186 °C; freezing point = −0.186 °C

g) HC2H3O2 is a weak electrolyte (i slightly greater than 1), so its freezing point is slightly below −0.186 °C.
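A short Python loop reproduces the whole table; the van't Hoff factors are the ideal complete-dissociation values, with acetic acid approximated as i = 1:

```python
# Colligative freezing-point depression, dTf = -i * Kf * m, for the 0.10 m
# solutes in Q13.81.
Kf, m = 1.86, 0.10   # deg C/m, mol/kg
i_factors = {"CO(NH2)2": 1, "NH4NO3": 2, "HCl": 2, "CaCl2": 3,
             "MgSO4": 2, "C2H5OH": 1, "HC2H3O2": 1}

for solute, i in i_factors.items():
    print(f"{solute}: fp = {-i * Kf * m:.3f} C")
```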
## Q13.83
NH3(aq) conducts weakly electric current. The same is true for acetic acid, HC2H3O2(aq). When the solutions go together, the resulting solution conducts very well electric current. Why?
## S13.83
NH3 reacts with HC2H3O2 to give NH4C2H3O2, a solution of NH4+ and C2H3O2- ions:

NH3(aq) + HC2H3O2(aq) → NH4C2H3O2(aq)

NH4C2H3O2(aq) → NH4+(aq) + C2H3O2-(aq)

Ammonium acetate is a strong electrolyte, so the mixed solution conducts electric current well.
## Q13.87
A typical root beer contains 0.13% of a 72% $$H_3PO_4$$ solution by mass. How many milligrams of phosphorus are contained in a 13 oz (1 oz = 29.6 mL) can of this root beer? Solution density is $$\rho=1.00 \;g/mL$$.
## S13.87
Mass of root beer = 13 oz × (29.6 mL/1 oz) × (1.00 g/1.00 mL) = 384.8 g

Mass of 72% H3PO4 solution = (0.13/100) × 384.8 g = 0.500 g

Mass of H3PO4 = (72/100) × 0.500 g = 0.360 g

Mass % phosphorus in phosphoric acid = (mass of phosphorus/mass of phosphoric acid) × 100 = (30.974 g/98.00 g) × 100 = 31.61%

Mass of phosphorus = (31.61/100) × 0.360 g = 0.114 g = 114 mg
## Q13.88
An aqueous solution has 113.1 g KOH/L of solution. The solution density is 1.11 g/mL. Your task is to use 100.0 mL of this solution to prepare 0.250 m KOH. What mass of which component, KOH or $$H_2O$$, would you add to the 100.0 mL of solution?

## S13.88

KOH molarity = (113.1 g KOH × (1 mol KOH/56.11 g KOH))/1 L soln = 2.016 M

$$H_2O$$ needed in the final soln = 0.1000 L orig. soln × (2.016 mol KOH/1 L soln) × (1 kg H2O/0.250 mol KOH) = 0.806 kg $$H_2O$$

Mass of original solution = 100.0 mL × 1.11 g/mL = 111 g original solution

Mass KOH = 100.0 mL × (1 L/1000 mL) × (113.1 g KOH/1 L soln) = 11.31 g KOH

Original mass of water = 111 g soln − 11.31 g KOH = 99.7 g $$H_2O$$

Mass of added H2O = 806 g H2O − 99.7 g H2O ≈ 707 g $$H_2O$$; about 707 g of water must be added.
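The mass bookkeeping above is compactly checked in Python (a sketch; names are ours):

```python
# Convert 100.0 mL of stock KOH solution (113.1 g KOH/L, d = 1.11 g/mL)
# into 0.250 m KOH by adding water.
V_stock_mL = 100.0
g_KOH = V_stock_mL / 1000 * 113.1          # g KOH in the portion
n_KOH = g_KOH / 56.11                      # mol KOH (KOH = 56.11 g/mol)
water_needed_g = n_KOH / 0.250 * 1000      # g water required for 0.250 mol/kg
water_present_g = V_stock_mL * 1.11 - g_KOH
print(f"add {water_needed_g - water_present_g:.0f} g H2O")   # ~707 g
```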
## Q13.103
Suppose that 1.15 mg of gold is obtained as a colloidal dispersion in which the gold particles are spherical, with a radius of 1.00 × 10^2 nm (density of gold = 18.23 g/cm^3). (a) What is the total surface area of the particles? (b) What is the surface area of a single cube of gold of mass 3.07 mg?
## S13.103
(a) Surface area per particle = 4πr^2 = 4(3.1416)(1 × 10^-7 m)^2 = 1.26 × 10^-13 m^2

Particle volume = 4πr^3/3 = 4π(1 × 10^-7 m)^3/3 = 4.19 × 10^-21 m^3

Particle mass = DV = (18.23 g/cm^3) × (100 cm)^3/(1 m)^3 × 4.19 × 10^-21 m^3 = 7.64 × 10^-14 g/particle

Number of Au particles = mass of Au/particle mass = (1.15 × 10^-3 g Au)/(7.64 × 10^-14 g/particle) = 1.51 × 10^10 particles

Total surface area = (1.26 × 10^-13 m^2/particle)(1.51 × 10^10 particles) = 1.9 × 10^-3 m^2

(b) V(Au) = (3.07 mg)(10^-3 g/mg)/(18.23 g/cm^3) = 1.68 × 10^-4 cm^3

L = (1.68 × 10^-4 cm^3)^(1/3) = 0.0552 cm

Area = 6L^2 = 6 × (0.0552 cm)^2 = 1.83 × 10^-2 cm^2 = 1.83 × 10^-6 m^2
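Part (a) is a geometry-plus-density chain that is easy to mistype by hand; a Python sketch in SI units (names are ours):

```python
# Total surface area of a gold colloid: per-particle area and volume,
# particle count from the total mass, then total area.
import math
r = 1.00e-7            # m (1.00e2 nm radius)
rho = 18.23e6          # g/m^3 (18.23 g/cm^3, as stated in the problem)
total_mass = 1.15e-3   # g of gold

area_each = 4 * math.pi * r**2
mass_each = rho * (4 / 3) * math.pi * r**3
n_particles = total_mass / mass_each
print(f"total area = {n_particles * area_each:.2e} m^2")   # ~1.9e-3 m^2
```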
## Q13.113
What volume of ethylene glycol ($$HOCH_2CH_2OH$$), with density $$\rho=2.21\;g/mL$$, must be added to 21.12 L of water ($$K_f=1.91\;°C/m$$) to produce a solution that freezes at −15 °C?
## S13.113

$\Delta T_f=-K_f m$

−15 °C − 0.00 °C = −1.91 °C m^-1 × m

molality = −15 °C/(−1.91 °C m^-1) = 7.85 m

mol HOCH2CH2OH = (7.85 mol/1 kg H2O) × (1 kg H2O/1 L H2O) × 21.12 L = 165.9 mol

Volume of ethylene glycol = 165.9 mol C2H6O2 × (62.07 g C2H6O2/1 mol C2H6O2) × (1 mL/2.21 g) × (1 L/10^3 mL) = 4.66 L
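The inverse problem (target freezing point, then required molality, then volume) can be scripted as follows; the 62.07 g/mol molar mass of ethylene glycol is a standard value, the other numbers are from the problem:

```python
# Required volume of ethylene glycol to depress the freezing point by 15 C.
Kf = 1.91              # C/m, as given in the problem
target_dT = 15.0       # degrees of depression
m = target_dT / Kf     # mol glycol per kg water

kg_water = 21.12       # 21.12 L of water ~ 21.12 kg
grams = m * kg_water * 62.07            # g of ethylene glycol
print(f"{grams / 2.21 / 1000:.2f} L")   # at the stated 2.21 g/mL, ~4.66 L
```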
## Q13.117
Define or explain the following terms or symbols: (a) $$x_B$$; (b) $$P_A^0$$; (c) $$K_f$$; (d) $$i$$
## S13.117
1. mole fraction of component B: the ratio of the number of moles of B to the total number of moles in the solution
2. vapor pressure of the pure solvent at the given temperature: $P_A^0=\dfrac{P_A}{X_A}$
3. molal freezing-point depression constant (cryoscopic constant)
4. van't Hoff factor: the ratio of the measured value of a colligative property to the value expected if the solute were a nonelectrolyte, $i=\dfrac{\Delta T_f(\text{measured})}{\Delta T_f(\text{expected})}$
1) _____________ (CH2Cl2(l)/BF3/CCl4(l)) (choose one) is expected to be the most water soluble because _________________.

2) _____________ (C6H14/C6H5OH/CCl4/C10H8) (choose one) is moderately soluble in both water and benzene (C6H6) because ________________.

3) Most substances that are soluble in water aren't soluble in benzene (and vice versa). However, there are substances that are moderately soluble in both water and benzene; _________________ (CH4/CH3(CH2)2CH2OH) (choose one) is an example of this.
33) What is the mol fraction of the solute in the following?
a) a solution prepared by mixing 2.17 moles of C7H16, 1.5 moles C8H18, and 2.7 moles C9H20
35) Your friend wants to produce a solution that is 7.23% C3H8O2, what volume of, C3H8O2, would you suggest your friend to add per kilogram of water to achieve this?
39) if a 100ml sample of water at 293 K contains 13 ppm of aluminum, then how many aluminum ions are present in the solution and what is the molality of the solution?
41) Consider a solution consisting of 32g of KClO4 and 500 g of water that is heated to a temperature of 313.15K.
a) Determine whether the solution is saturated, unsaturated, or supersaturated at 313.15 K.
43) 28.31 mL of CO2(g) dissolves in 1.0L water at 25˚C at a CO2 pressure of 1 atm, if the pressure rose to 4.2 atm (all other variables held constant) what will the molarity of CO2 in the saturated solution be?
45) What mass of natural gas will dissolve if a sample of natural gas is under a pressure of 17 atm is kept in contact with 1000 kg of water? Consider that the solubility of natural gas at 293 K and 1.0 atm is about 0.037g/Kg water.
47) Neon (Ne) at 1 atm has an aqueous solubility equal to 25.9 ml Ne(g) measured at STP/L water. Determine molarity of Ne in water saturated with air at 1.0 atm and 293 K. (air has .0015% Ne by volume.)
49) Using Henry’s law, describe why the pressure at a fixed temp can increase.
51) If one were to mix 40.3 g of C6H6 and 53.5 g of C6H5CH3 at 298 K then what will be the corresponding partial pressures? Total pressure? (P˚C6H6 =95.1 torr, P˚C6H5CH3= 28.4 torr).
53) For a solution of .76 mol of NaCl in 690 g H2O what will be the vapor pressure at 298K? (P˚water= 23.8mmHg @ 25˚C)
55) The products of a reaction are 27% C6H5CH=CH2 and 73% C6H5CH2CH3 (by mass). Given this, if the mixture were to be separated via fractional distillation at 363K, what would be the vapor pressures at equilibrium (P˚C6H5CH=CH2= 134mmHg, P˚C6H5CH2CH3= 182mmHg)?
57) Given a solution that is 40% benzene and 60% toluene with a boiling point of 371.6 K, what is P˚benz at 371.6K? (P˚toluene= 533mmHg @371.6K)
59) if a .58 g sample of CO2 is dissolved in 250 ml of an appropriate solvent at 298K and the solution has an osmotic pressure of 2 mmHg then what is the molar mass of CO2?
61) Describe what occurs in terms of osmotic pressure when a cucumber is placed into a solution of highly concentrated salt and shrivels up.
63) what will be the osmotic pressure of a 2 M aqueous solution at 293 K?
65) If an aqueous solution has 0.97 g/L of an organic solution then the osmotic pressure of the solution will be 62.9 torr, at 298K. What is the molar mass (in g/mol) of this solution?
75) A compound is composed of approximately 40.3% B, 52.2% N, 7.5% H by mass. When 2.8867 g is dissolved in 50 mL of benzene, the solution freezes at 1.3 °C. What is the molecular formula of the compound? (fp pure benzene = 5.48 °C; Kf benzene = 5.12 °C/m; density of benzene = 0.879 g/mL)
77) What mass of NaCl must be added to a 2.37 L sample of H2O at a pressure of 1 atm in order to increase the boiling point by 2 °C?
83) Predict the freezing points of the following 0.25 m solutions when dissolved in water (ΔTf = −i Kf m; Kf water = 1.86 °C/m):
a) CO(NH2)2
b) NH4NO3
c) HCl
d) CaCl2
87) Solution A has 0.617 g CO(NH2)2 dissolved in 90 g of water; solution B has 3.7 g of C12H22O11 in 80 g of water. If the two solutions stand together in a sealed container so that water can transfer between them through the vapor phase, what will be the composition of each solution at equilibrium?
88) Given two isomers with differing freezing points, boiling points, and densities what methods could be instituted in order to separate the two if they are mixed in solution?
113) What will be the resulting total vapor pressure when 58.9 g C6H14 is introduced into a container with 44 g of C6H6 at 332 K, if P° C6H14 = 573 mmHg and P° C6H6 = 391 mmHg?
## S13.1
CH2Cl2(l) is the most water soluble because water is polar and, of the molecules given, CH2Cl2(l) is the most polar molecule.

2) You need a molecule with both polar and nonpolar character that is not ionic; phenol, C6H5OH, is the only choice that fits.

3) CH3(CH2)2CH2OH, because its butyl chain is nonpolar while its OH group is polar.
## S13.33
2.17 moles of C7H16, 1.5 moles C8H18, and 2.7 moles C9H20

mol(total) = 6.37

mole fractions:

C7H16 = 2.17/6.37 = 0.34

C8H18 = 1.5/6.37 = 0.235

C9H20 = 2.7/6.37 = 0.424
## S13.35
mols H2O = 1000 g × (1 mol/18.02 g/mol) = 55.49 mols H2O

X(C3H8O2) = 7.23% = 0.0723 = n(C3H8O2)/(n(C3H8O2) + n(H2O))

n(C3H8O2) = 0.0723 n(C3H8O2) + 4.01, so n(C3H8O2) = 4.01/(1 − 0.0723) = 4.32 mol C3H8O2

4.32 mol C3H8O2 × (76.09 g/mol) × (1 mL/1.04 g) ≈ 3.2 × 10^2 mL C3H8O2 needed to produce the solution (taking the molar mass of C3H8O2 as 76.09 g/mol and the density of propylene glycol as about 1.04 g/mL)
## S13.39
a) M(Al) = 26.98 g/mol

13 ppm corresponds to 13 µg Al per mL (13 mg/L), so the 100 mL sample contains 100 mL × 13 µg/mL = 1.3 × 10^-3 g Al

mol Al = 1.3 × 10^-3 g × (1 mol Al/26.98 g Al) = 4.8 × 10^-5 mol Al

# Al ions = (4.8 × 10^-5 mol)(6.022 × 10^23 /mol) = 2.9 × 10^19 ions

b) molality:

m = 4.8 × 10^-5 mol/(100 g H2O × (1 kg/1000 g)) = 4.8 × 10^-4 m
41) Dissolved KClO4 per 100 g water = 100 g H2O × (32 g/500 g H2O) = 6.4 g KClO4

At 313.15 K, a saturated solution has a concentration of about 4.6 g in 100 g H2O. Based on this, the solution is supersaturated.
43) PV = nRT ⟹ n = PV/RT = (1 atm × 0.02831 L)/(0.08206 × 298 K) = 0.001158 mol CO2

[CO2] = 0.001158 mol/1.0 L soln = 0.001158 M

Concentration at the higher pressure:

[CO2] = 0.001158 M × 4.2 atm/1 atm = 0.00486 M
45)
mass of natural gas= 1000 kg H2O*(.037g/kg)*17 atm = 629 g natural gas dissolved.
47)
henry’s law: C=kP
k=C1/P1=C2/P2
kNe=C/PNe=((25.9ml/L)(1 mol Ne/22414 ml at STP))/1atm= .001155M/atm
C=KNePNe= (.001155M/atm)(.000015)=1.73E-8 M Ne
49) Because gas molecules have very low density, the volume of a solution remains essentially constant as a gas dissolves in it. The concentration of dissolved gas is therefore proportional to the mass of gas dissolved, and by Henry's law that mass is proportional to the gas pressure.
51)
nC6H6= 40.3 g * (1 mol/78.11g)= .516 mols
nC6H5CH3= 53.5 g * (1 mol/ 92.14 g) = .580 mols
XC6H6= .516/(.516+.580)= .4708
XC6H5CH3= .580/(.516+.580)= .5292
PC6H6= .4708*95.1 torr= 44.77 torr= 44.77 mmHg
PC6H5CH3= .5292*28.4 torr= 15.029 torr= 15.029 mmHg
Ptot= 44.7 + 15.029 =59.799 mmHg
53)
n(NaCl) = 0.76 mols (given); since NaCl dissociates into Na+ and Cl-, n(solute particles) = 2 × 0.76 = 1.52 mols

n(H2O) = 690 g/(18.02 g/mol) = 38.29 mols

X(water) = 38.29/(38.29 + 1.52) = 0.962

P(soln) = X(water) × P°(water) = (0.962)(23.8 mmHg) = 22.9 mmHg
55)
(27g(C6H5CH=CH2)/(104g/mol))= .26 mols
(73g(C6H5CH2CH3)/(106g/mol))= .69 mols
X(C6H5CH=CH2)= .26/(.26+.69) = .27
P(C6H5CH=CH2)= (.27)(134mmHg)= 36.18 mmHg
X(C6H5CH2CH3)= .69/(.26+.69)= .73
P(C6H5CH2CH3)= (.73)(182mmHg)= 132.86mmHg
57)
P(toluene) = X(toluene) × P°(toluene) = (0.6)(533 mmHg) = 319.8 mmHg

P(benzene) = P(tot) − P(toluene) = 760 − 319.8 = 440.2 mmHg

440.2 mmHg = X(benz) × P°(benz)

P°(benz) = 440.2/0.4 = 1100.5 mmHg
59)
π = MRT
n/v = π/RT = ((2mmHg)(1atm/760mmHg))/((.08206)(298K))= 1.076E-4M
solute amount= .25 L * (1.076E-4M/1L)= 2.69E-5 mols
molar mass = .58g/2.69E-5 mol = 2.156E4 g/mol
61) The solutions inside the plant tissue are less concentrated than the concentrated salt solution outside, so water crosses the semipermeable membrane outward to dilute the salt solution, and the plant material shrivels.
63)
π = (2 M)(0.08206 L·atm/(mol·K))(293 K)

π = 48 atm
65)
62.9 torr / 760 = .0828atm
(per liter of solution, containing 0.97 g of solute)

0.0828 atm = (0.97 g/Molar Mass ÷ 1 L) × 0.0821 L·atm/(mol·K) × 298 K
Molar mass= 286.7 g/mol
75)
m = ∆Tf/(−Kf) = (1.3 − 5.48)/(−5.12) = 0.816 m

amount = 50 mL benz × (0.879 g/mL) × (1 kg/1000 g) × (0.816 mol/kg benz) = 0.03586 mol

molecular weight = 2.8867 g/0.03586 mol = 80.50 g/mol

mol B = 40.3 g/(10.811 g/mol) = 3.72 mol B; 3.72/3.72 = 1

mol N = 52.2 g/(14.0067 g/mol) = 3.72 mol N; 3.72/3.72 = 1

mol H = 7.5 g/(1.008 g/mol) = 7.44 mol H; 7.44/3.72 = 2

Empirical formula BNH2 (26.83 g/mol); 80/26.83 = 3, so the molecular formula is B3N3H6
77) ∆Tb = 2 °C; Kb(water) = 0.512 °C/m; volume of H2O = 2.37 L (2.37 kg); i = 2

m = ∆Tb/(i Kb) = 2/(2 × 0.512) = 1.95 m

solute mass = 2.37 kg water × 1.95 mol NaCl/kg water × 58.4 g/mol NaCl = 270 g NaCl
83)
a) CO(NH2)2: ΔTf = −(1)(1.86)(0.25) = −0.465 °C

b) NH4NO3: ΔTf = −(2)(1.86)(0.25) = −0.93 °C

c) HCl: ΔTf = −(2)(1.86)(0.25) = −0.93 °C

d) CaCl2: ΔTf = −(3)(1.86)(0.25) = −1.395 °C
87)
n(CO(NH2)2) = 0.617 g/60.06 g/mol = 0.0103 mol

n(H2O) with the CO(NH2)2 = 90 g/18.02 g/mol = 4.99 mol

n(C12H22O11) = 3.7 g/342.3 g/mol = 0.0108 mol

n(H2O) with the C12H22O11 = 80 g/18.02 g/mol = 4.44 mol; total water = 9.43 mol

Water transfers through the vapor phase until the solute mole fractions match:

0.0103/(0.0103 + n(water)) = 0.0108/(0.0108 + (9.43 − n(water)))

n(water on the urea side) = 4.60 mol, so

X(CO(NH2)2) = X(C12H22O11) = 0.0103/(0.0103 + 4.60) = 0.00223
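Setting the two solute mole fractions equal gives a linear equation that a few lines of Python solve directly (names are ours):

```python
# Isopiestic equilibrium for question 87: water transfers through the vapor
# phase until the two solute mole fractions are equal. Setting
# n_u/(n_u + w) = n_s/(n_s + n_tot - w) and solving gives w = n_u*n_tot/(n_u+n_s).
n_urea = 0.617 / 60.06
n_sucrose = 3.7 / 342.3
n_water_total = (90 + 80) / 18.02

w = n_urea * n_water_total / (n_urea + n_sucrose)  # mol water ending up with the urea
x = n_urea / (n_urea + w)                          # common solute mole fraction
print(f"water with urea: {w:.2f} mol; solute mole fraction: {x:.5f}")
```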
88) Separation by fractional distillation (exploiting the different boiling points) or by fractional solidification (exploiting the different freezing points).
113)
58.9 g/(86 g/mol) = 0.68 mol C6H14

44 g/(78 g/mol) = 0.56 mol C6H6

1.24 mol total

X(hexane) = 0.68/1.24 = 0.55

X(benzene) = 0.56/1.24 = 0.45

P(tot) = 0.55 × 573 + 0.45 × 391 = 315 + 176 = 491 mmHg
## Q13.1
Which of the following do you expect to be most water soluble, and why? C10H8(s),NH2OH(s),C6H6(l),CaCO3(s)
## Q13.2
Which of the following is moderately soluble both in water and in benzene [C6H6 (l)], and why? (a) 1-butanol, CH3(CH2)2CH2OH; (b) naphthalene, C10H8; (c) hexane, C6H14; (d) NaCl (s).
## Q13.3
Substances that dissolve in water generally do not dissolve in benzene. Some substances are moderately soluble in both solvents, however. One of the following is such a substance. Which do you think it is and why?
## Q13.33
Calculate the mole fraction of the solute in the following aqueous solutions: (a) 0.112M C6H12O6 (d=1.006g/ml); (b) 3.20% ethanol, by volume (d=0.993 g/ml; pure CH3CH2OH, d=0.789g/ml).
35. What volume of glycerol, CH3CH(OH)CH2OH (d=1.26g/ml), must be added per kilogram of water to produce a solution with 4.85 mol % glycerol?
39. Refer to Figure 13-8 and determine the molality of NH4Cl in a saturated aqueous solution at 40 °C.
41. A solution of 20.0 g KClO4 in 500.0 g of water is brought to a temperature of 40 °C. (a) Refer to Figure 13-8 and determine whether the solution is unsaturated or supersaturated at 40 °C. (b) Approximately what mass of KClO4, in grams, must be added to saturate the solution (if originally unsaturated), or what mass of KClO4 can be crystallized (if originally supersaturated)?
43. Under an O2 (g) pressure of 10.00 atm, 28.31 ml of O2 (g) dissolves in 1.00 L H2O at 25°C. What will be the molarity of O2 in the O2 in the saturated solution at 25°C when O2 pressure is 3.86 atm? (Assume that the solution volume remains at 1.00L)?
45. Natural gas consists of about 90% methane, CH4. Assume that the solubility of natural gas at 20 °C and 1 atm gas pressure is about the same as that of CH4, 0.02 g/kg water. If a sample of natural gas under a pressure of 20 atm is kept in contact with 1.00 × 10^3 kg of water, what mass of natural gas will dissolve?
47. The aqueous solubility at 20°C of Ar at 1 atm is equivalent to 33.7 ml Ar (g), measured at STP, per liter of water. What is the molarity of Ar in water that is saturated with air at 1 atm and 20°C? Air contains 0.934% Ar by volume. Assume that the volume of water does not change when it becomes saturated with air.
49. Henry s law can be stated this way: The mass of a gas dissolved by a given quantity of solvent at a fixed temperature is directly proportional to the pressure of the gas. Show how this statement is related to equation (13.2).
51. What are the partial and total vapor pressures of a solution obtained by mixing 35.8 g benzene, C6H6, and 56.7 g toluene, C6H5CH3, at 25 °C? At 25 °C, the vapor pressure of C6H6 = 95.1 mmHg; the vapor pressure of C6H5CH3 = 28.4 mmHg.
53. Calculate the vapor pressure at 25 °C of a solution containing 165 g of the nonvolatile solute, glucose, C6H12O6, in 685 g H2O. The vapor pressure of water at 25 °C is 23.8 mmHg.
55. Styrene, used in the manufacture of polystyrene plastics, is made by the extraction of hydrogen atoms from ethylbenzene. The product obtained contains about 38% styrene (C6H5CH=CH2) and 62% ethylbenzene (C6H5CH2CH3), by mass. The mixture is separated by fractional distillation at 90 °C. Determine the composition of the vapor in equilibrium with this 38%-62% mixture at 90 °C. The vapor pressure of ethylbenzene is 182 mmHg and that of styrene is 134 mmHg.
57. A benzene-toluene solution with xbenz = 0.300 has a normal boiling point of 98.6 °C. The vapor pressure of pure toluene at 98.6 °C is 533 mmHg. What must be the vapor pressure of pure benzene at 98.6 °C? (Assume ideal solution behavior.)
59. A 0.72 g sample of polyvinyl chloride (PVC) is dissolved in 250.0 mL of a suitable solvent at 25 °C. The solution has an osmotic pressure of 1.67 mmHg. What is the molar mass of the PVC?
61. When the stems of cut flowers are held in concentrated NaCl (aq), the flowers wilt. In a similar solution a fresh cucumber shrivels up (becomes pickled). Explain the basis of these phenomena.
63. In what volume of water must 1 mol of a nonelectrolyte be dissolved if the solution is to have an osmotic pressure of 1 atm at 273 K? Which of the gas laws does this result resemble?
65. At 25 °C a 0.50 g sample of polyisobutylene (a polymer used in synthetic rubber) in 100.0 mL of benzene solution has an osmotic pressure that supports a 5.1 mm column of solution (d = 0.88 g/mL). What is the molar mass of the polyisobutylene? (For Hg, d=13.6 g/ml.)
75. Thiophene (fp = - 38.3; bp = 84.4 °C) is a sulfur containing hydrocarbon sometimes used as a solvent in place of benzene. Combustion of a 2.348 g sample of thiophene produces 4.913 g CO2, 1.005 g H2O, and 1.788 g SO2. When a 0.867 g sample of thiophene is dissolved in 44.56 g of benzene (C6H6), the freezing point is lowered by 1.183 °C. What is the molecular formula of thiophene?
77. Cooks often add some salt to water before boiling it. Some people say this helps the cooking process by raising the boiling point of the water. Others say not enough salt is usually added to make any noticeable difference. Approximately how many grams of NaCl must be added to a liter of water at 1 atm pressure to raise the boiling point by 2 °C? Is this a typical amount of salt that you might add to cooking water?
83. NH3 (aq) conducts electric current only weakly. The same is true for acetic acid, HC2H3O2(aq).When these solutions are mixed, however, the resulting solution conducts electric current very well. Propose an explanation.
87. A typical root beer contains 0.13% of a 75% H3PO4 solution by mass. How many milligrams of phosphorus are contained in a 12 oz can of this root beer? Assume a solution density of 1.00 g/mL; also, 1 oz = 29.6 mL.
88. An aqueous solution has 109.2 g KOH/L solution. The solution density is 1.09 g/ml. Your task is to use 100.0 mL of this solution to prepare 0.250 m KOH. What mass of which component, KOH or H2O, would you add to the 100.0 mL of solution?
103. Instructions on a container of antifreeze (ethyleneglycol; fp, - 12.6 °C, bp, 197.3 °C) give the following volumes of Prestone to be used in protecting a 12 qt cooling system against freeze-up at different temperatures (the remaining liquid is water): 10 °F, 3 qt; 0 °F, 4 qt; - 15 °F, 5 qt; - 34 °F, 6 qt. Since the freezing point of the coolant is successively lowered by using more antifreeze, why not use even more than 6 qt of antifreeze (and proportionately less water) to ensure the maximum protection against freezing?
113. Cinnamaldehyde is the chief constituent of cinnamon oil, which is obtained from the twigs and leaves of cinnamon trees grown in tropical regions. Cinnamon oil is used in the manufacture of food flavorings, perfumes, and cosmetics. The normal boiling point of cinnamaldehyde, C6H5CH = CHCHO, is 246.0 °C, but at this temperature it begins to decompose. As a result, cinnamaldehyde cannot be easily purified by ordinary distillation. A method that can be used instead is steam distillation. A heterogeneous mixture of cinnamaldehyde and water is heated until the sum of the vapor pressures of the two liquids is equal to barometric pressure. At this point, the temperature remains constant as the liquids vaporize. The mixed vapor condenses to produce two immiscible liquids; one liquid is essentially pure water and the other, pure cinnamaldehyde. The following vapor pressures of cinnamaldehyde are given: 1 mmHg at 76.1 °C; 5 mmHg at 105.8 °C; and 10 mmHg at 120.0 °C. Vapor pressures of water are given in Table 13.2.(a) What is the approximate temperature at which the steam distillation occurs?(b) The proportions of the two liquids condensed from the vapor is independent of the composition of the boiling mixture, as long as both liquids are pre- sent in the boiling mixture. Explain why this is so. (c) Which of the two liquids, water or cinnamaldehyde, condenses in the greater quantity, by mass? Explain.
117. In your own words, define or explain the following terms or symbols: (a) xB; (b) PA°; (c) Kf; (d) i; (e) activity.
1. NH2OH(s)
2. (a) 1-butanol: its polar OH group hydrogen-bonds with water while its hydrocarbon chain interacts with benzene, so it is moderately soluble in both solvents.
3. Salicyl alcohol
33. (a). 0.00204 (b) 0.0101
35. 207ml
39. 8.66m
41. (a) Unsaturated. (b) 3g
43. 4.47*10-3M
45. 400g
47. 1.4*10-5M
49. Because of the low density of molecules in the gaseous state, the solution volume remains essentially constant as a gas dissolves in a liquid.
51. 56.9mmHg
53. 23.2mmHg
55. 0.32
57. 1290mmHg
59. 3.2*104g/mol
61. Both the flower and cucumber fluids are less concentrated than the salt solution, so they lose water by osmosis.
63. 22.4L
65. 2.8*105g/mol
75. C4H4S
77. 120g
83. NH3(aq) + HC2H3O2(aq) → NH4C2H3O2(aq)

NH4C2H3O2(aq) → NH4+(aq) + C2H3O2-(aq)
87. 0.11 g (110 mg) of phosphorus
88. 682g H2O
103. Because the freezing point of the glycol/water mixture passes through a minimum (near 50% ethylene glycol), using more than 6 qt of antifreeze would begin to raise the freezing point again rather than lower it.
117. (a) Mole fraction of component B
(b) Vapor pressure of the pure solvent A
(c) Molal freezing-point depression constant
(d) van't Hoff factor
(e) Activity (the effective concentration used for nonideal solutions)
3.) Substances that dissolve in water generally do not dissolve in benzene. Some substances are moderately soluble in both solvents, however. One of the following is such a substance. Which do you think it is and why?
Salicyl alcohol, hydrochloric acid, oxyacetic acid
Answer: Salicyl Alcohol because of its OH groups and the benzene ring
http://chemwiki.ucdavis.edu/Analytical_Chemistry/Chemical_Reactions/Properties_of_Matter/Solubility_Rules
43.) Under an O2(g) pressure of 1.00 atm, .03522 L of O2(g) dissolves in 1 L of water at 25°C. What will be the molarity of O2 in the saturated solution at 25°C when the O2 pressure is 4.88 atm? (Assume that the solution volume remains at 1 L).
Answer: n = PV/RT =( 1)(.03522)/(0.0821)(298) = .0014 mol
So molarity = .0014 mol/1.0 L = .0014 M
C = KP, so K = C/P
k = .0014 M/ 1 atm
When O2 pressure is 4.88 atm…
.0014 = C/(4.88)
C (concentration of O2) = .0068 M
http://chemwiki.ucdavis.edu/Physical_Chemistry/Physical_Properties_of_Matter/Solutions/Solubilty/Types_of_Saturation
53.) Calculate the vapor pressure at 25°C of a solution containing 200 g of the nonvolatile solute, glucose, C6H12O6, in 700 g of water. The vapor pressure of water at 25°C is 23.8 mmHg.
Answer: Raoult’s Law: PA = (XA)(P°A)
200 g C6H12O6 = 1.11 mol
700 g H2O = 38.89 mol
Mole fraction of water (XA) = (38.89)/(38.89 + 1.11) = 0.972

P°A = 23.8 mmHg (vapor pressure of the pure solvent at 25 °C)

So, PA = (0.972)(23.8) = 23.1 mmHg
http://chemwiki.ucdavis.edu/Physical_Chemistry/Physical_Properties_of_Matter/Solutions/Ideal_Solutions/Changes_In_Vapor_Pressure%2c_Raoult's_Law
63.) In what volume of water must 1 mol of a nonelectrolyte be dissolved if the solution is to have an osmotic pressure of 3.0 atm at 298 K? What exactly is osmotic pressure?
3 = (1/V)(.0821)(298 K)
3 = (1/V)(24.47)
V = 8.16 L
Osmotic pressure is the necessary pressure required to stop osmotic flow (net flow of water) in a solution.
http://chemwiki.ucdavis.edu/Physical_Chemistry/Physical_Properties_of_Matter/Solutions/Colligative_Properties/Osmotic_Pressure
## Q13.87
A soda contains 0.15% of an 80% CO2 solution by mass. How many milligrams of carbon are contained in a 12 oz can of soda? Assume a solution density of 1.00 g/mL and 1 oz = 29.6 mL.
## S13.87
12 oz = 355.2 mL; at 1.00 g/mL the can contains 355.2 g of soda

Mass of the 80% CO2 solution = 0.15% × 355.2 g = 0.533 g

Mass of CO2 = 0.80 × 0.533 g = 0.426 g

Mass of carbon = 0.426 g × (12.01 g C/44.01 g CO2) = 0.116 g = 116 mg of carbon
http://chemwiki.ucdavis.edu/Analytical_Chemistry/Quantifying_Nature/Density_and_Percent_Compositions
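The chained mass fractions are a one-liner each in Python; this sketch (names are ours) reproduces the 116 mg figure:

```python
# Chained mass fractions for the soda problem: can mass -> CO2 solution
# -> CO2 -> carbon. Molar masses are standard; the rest are the stated data.
can_g = 12 * 29.6 * 1.00          # 12 oz at 1.00 g/mL
co2_solution = can_g * 0.0015     # 0.15 % of the can is the CO2 solution
co2 = co2_solution * 0.80         # which is 80 % CO2 by mass
carbon = co2 * 12.011 / 44.009    # carbon fraction of CO2
print(f"{carbon * 1000:.0f} mg C")   # ~116 mg
```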
1. Which of the following would you expect to be the most soluble in hexane, and why? CH3OH, NaI, C5H12
C5H12, which is pentane, is nonpolar just like hexane, and compounds with like properties are most soluble in each other. Both are nonpolar because of the small electronegativity difference between carbon (2.55) and hydrogen (2.20).
2. Which of the following is moderately soluble both in water and in benzene [C6H6], and why? (a) Phenol, C6H5OH (b) Methane, CH4 (c) Hexane, C6H14 (d) Oxygen, O2
(a) Phenol, C6H5OH, is the only compound with both a polar OH group and a nonpolar benzene ring, so it is the only one moderately soluble in both water and benzene.
# 3 – Why would salicyl alcohol dissolve in both water and benzene while most other substances don't?
Salicyl alcohol contains a ring of benzene and can use its –OH groups to hydrogen bond to water molecules
# 33 – Calculate the amount of solvent in a 0.1M Fe4Cl12 solution with density 3.45 g/mL
solvent amount = (1 L soln × (1000 mL/1 L) × (3.45 g/1 mL)) − (0.1 mol Fe4Cl12 × (501.4 g/1 mol)) = 3450 g − 50.14 g = 3399.86 g

= 3399.86 g × (1 mol H2O/18.02 g H2O) = 188.7 mol H2O
35.) What volume of glucose, C6H12O6 (d=1.54 g/ml), must be added per kilogram of water to produce a solution with 3.23 mol % glucose?
X(glucose) = 3.23% = 0.0323 = n/(n + 55.49 mol H2O), so n = (0.0323 × 55.49)/(1 − 0.0323) = 1.85 mol glucose per kg of water

1.85 mol × (180.18 g/1 mol) = 333 g glucose; D = M/V ⟹ V = M/D = 333 g/(1.54 g/mL) = 216 mL
#39 – The solubility of K2SO4 at 30 °C is 13.2 g per 100 g of water. Calculate the molality of the saturated solution.

molality = (13.2 g × (1 mol K2SO4/174.26 g K2SO4))/(100 g H2O × (1 kg/1000 g)) = 0.757 m
41.) A solution of 32.0 g KNO3 in 11.0 g of water is brought to a temperature of 35°C.
(a) Refer to Figure 13-8 and determine whether the solution is unsaturated or supersaturated at 35°C.
Answer: 32 g KNO3 in 11.0 g of water corresponds to 32 × (100/11.0) ≈ 291 g KNO3 per 100 g H2O, far above the solubility (about 53 g/100 g H2O) at 35 °C, so the solution is supersaturated.
(b) Approximately what mass of KNO3, in grams, must be added to saturate the solution (if originally unsaturated), or what mass of KNO3 can be crystalized (if originally supersaturated)?
11.0 g H2O × (53 g KNO3/100 g H2O) = 5.83 g KNO3 can remain dissolved

32 g − 5.83 g = 26.17 g KNO3 can be crystallized
43.) Under an N2 (g) pressure of 1.0atm, 21.41 ml of N2(g) dissolves in 3.00L H2O at 0°C. What will be the molarity of N2 in the saturated solution at 0°C when the N2 pressure is 5.83 atm? (Assume that the solution volume remains at 3.00L).
Molarity = [0.02141 L N2 × (1 mol N2/22.4 L N2 (STP))]/3.00 L soln = 3.19 × 10^-4 M N2

k = C/P(gas) = 3.19 × 10^-4 M N2/1.00 atm

C = k × P(gas) = (3.19 × 10^-4 M/atm) × 5.83 atm = 1.86 × 10^-3 M N2
45. If the solubility of H2 gas is .015 g / atm • kg H2O, calculate the mass of H2 gas that dissolves in a vessel containing hydrogen gas at 4.3 atm and 254 kg of water.
Mass H2 dissolves = (4.3 atm)(254 kg water)(.015 g / atm • kg water) = 16.4 g H2
47. a) A closed container holds 5.40 L of liquid water and Ne gas at 2.45 atm. The quantity of neon gas dissolved in the water is equivalent to 28.9 grams of Ne. Calculate the solubility of Ne in water in mol Ne/(kg H2O · atm).
mol = (28.9 g Ne)(1 mol / 20.18 g) = 1.43 mol Ne
mass water = (5.40 L water)(1000 mL / 1 L)(1 mL / 1 cm^3)(1 g / 1 cm^3)(1 kg / 1000 g) =
=5.40 kg H2O
solubility Ne = (1.43 mol Ne) / (5.40 kg water)(2.45 atm) = 0.108 mol / kg H2O • atm
b) What is the molarity of the dissolved neon?
Molarity = mol solute / Liter solvent = (1.43 mol Ne) / (5.40 L water) = 0.265 M
49. 4.52 g of an unknown compound reduces the freezing point of 65.14 g of ethanol (Kf = 1.99 K • kg / mol) from -114 °C to -145 °C. What is the molar mass of this substance?
∆Tf = −31 °C = −31 K; ∆Tf = −i Kf m, and assuming the solute dissociates into two particles (i = 2):

−31 K = −2(1.99)(mol/0.06514 kg), so mol = 0.507 mol of substance
molar mass = 4.52 g / .507 mol = 8.91 g / mol
51) if the partial pressure for gas A is 44.11 mmHg and for gas B is 22.18 mmHg what is the total pressure of a mixture of 1 mole of each?
Ptot = P1 + P2… 44.11 mmHg + 22.18 mmHg = 66.29 mmHg
53) Calculate the vapor pressure at 25 °C of a solution containing 180 g of C6H12O6 in 700 g H2O. The vapor pressure of the water at 25 °C is 24.8 mmHg.

First solve for the mole fraction of water: 180 g C6H12O6 is 1 mole, 700 grams of water is 38.8 moles. (moles water)/(total moles) = 38.8/39.8 = 0.975

P = X(water) × P°(water) = 24.8 mmHg × (0.975) = 24.2 mmHg
57: Assume that the vapor pressure of water is 0.40 atm at 25 degrees Celsius. What is the vapor pressure of a solution of 100 grams water and 50 grams of C6H12O6? Use Raoult's law.
PH2O = XH2O * P*H20
XH2O = (100 g H2O * 1mol/18g) / ((100 g H2O * 1mol/18g) + (50 g C6H12O6*1mol/180g)) = 0.95
PH2O = XH2O * P*H20 = 0.95 * 0.4 atm = 0.38 atm
# 59 – A 0.89g sample is dissolved in 250mL of solvent at 25C with osmotic pressure 2.39mmHg. What is the molar mass of the sample?
(n/V) = (π/RT) = (2.39 mmHg × (1 atm/760 mmHg))/(0.08206 L atm mol^-1 K^-1 × 298.2 K) = 1.29 × 10^-4 M

solute amount = 0.25 L × (1.29 × 10^-4 mol/1 L) = 3.21 × 10^-5 mol; M = 0.89 g/(3.21 × 10^-5 mol) = 2.8 × 10^4 g/mol
61.) What is the process of osmosis?
Osmosis is the net flow of water through a semipermeable membrane from the side with lower solute concentration to the side with higher solute concentration.
63. How many moles of a nonelectrolyte must be dissolved in 2.30 L of water to form a solution that has an osmotic pressure of 4.30 atm at 298 K?
p = MRT 4.30 = (mol / 2.30 L)(.08206)(298 K)
mol = .404 mol
65) What are the factors that contribute to osmotic pressure?
Van’t Hoff factor = i
Temperature = T (in kelvin)
Molarity = M
Universal gas constant = R
75. The molecular formula of thiophene is C4H4S. How many grams of CO2, H2O, and SO2 are produced when 0.867 grams of thiophene are combusted?

MW thiophene = 4 × 12.01 + 4 × 1.008 + 32.065 = 84.14 g/mol

Combustion of thiophene:

C4H4S + 6O2 -> 4CO2 + 2H2O + SO2

-> moles thiophene = 0.867 g/84.14 g/mol = 0.0103 moles

-> g CO2 = 4 × 0.0103 moles × 44.01 g/mole = 1.81 g

-> g H2O = 2 × 0.0103 moles × 18.02 g/mole = 0.371 g

-> g SO2 = 1 × 0.0103 moles × 64.07 g/mole = 0.660 g
# 77 – How much NaCl is required to change the boiling point of water by 5 °C?

m = (ΔT)/(i × K) = (5 °C)/(2.00 × 0.512 °C/m) = 4.9 m

solute mass = 1 L H2O × (1 kg H2O/1 L H2O) × (4.9 mol NaCl/1 kg H2O) × (58.4 g NaCl/mol NaCl) = 286 g NaCl
83.) How can you create a solution that is a good conductor of electric current?
Answer: Mix a weak acid with a weak base that react to form a salt and water. Each starting solution contains few ions, but the salt produced is a strong electrolyte, so ions become abundant in the combined solution and its conductivity is much higher than that of either starting solution.
87. A 16 oz bottle of beer is 4.5% alcohol by volume. Calculate the mass of this ethanol alcohol if it has a density of .789 g/mL. (1 oz = 29.6 mL).
mL = (16 oz)(29.6 mL / 1 oz) = 473.6 mL
volume ethanol = (473.6 mL)(.045) = 21 mL
mass ethanol = (21 mL)(.789 g / mL) = 16.8 g ethanol
## Q13.88
An aqueous solution has 1 M KOH. Use 200 ml of this solution to prepare .250 M KOH what mass of which component, KOH or H2O would you add to the 200 ml of solution.
## S13.88
You must add H2O to lower the molar concentration. The solution contains 1 M × 0.200 L = 0.2 mol KOH, therefore

0.2 mol/(200 mL + x mL H2O) = 0.25 M

x = 600 mL H2O (assuming the volumes are additive)
# 113 – A solution of KI contains 256g KI per every 100g water. What is the percent mass of the KI?
%KI = (256g KI) / (256g KI + 100g H20) * 100% = 71.9% = 71.9g KI/ 100g solution
## Q13.117
Define or explain the following terms: (a) saturated solution (b) osmotic pressure (c) molarity (d) supersaturated solution (e) molality
## S13.117
1. Saturated solution: It is when the quantity of dissolved solute stays constant with time.
2. Osmotic Pressure: It is the pressure necessary to stop the osmotic flow into a solution. For dilute solutions of nonelectrolytes, π = M × RT.
3. Molarity: It is the conversion factor which relates amount of solute to the volume of solution.
4. Supersaturated Solution: It is when the amount of solute is greater than the quantity in a saturated solution and supersaturated solutions are unstable.
5. Molality: It is the amount of solute (moles) divided by the mass of solvent (in Kg)
## Q13.103
Why not use 100% pure ethylene glycol as an antifreeze?
## S13.103
Because the freezing point of the ethylene glycol/water mixture is at a minimum at approximately 50% ethylene glycol, pure ethylene glycol would not make a suitable antifreeze; beyond the minimum, adding more glycol raises the freezing point again.
Arrange the following in order from most soluble to least soluble in water, and explain your ordering:

Na2CO3(s), C2H4(g), CH3(l), CaCl2(s)

CaCl2(s) is the most soluble.

Na2CO3(s) is also water soluble (alkali-metal salts are soluble), though less so than CaCl2.

C2H4(g) and CH3(l): nonpolar molecules are insoluble in water.
Some substances are only soluble in water; some substances are only soluble in acid. Which of the following is soluble in both acid and water?

Al(ClO3)3, Al(OH)3, Al2SiO5, Al2O3

Al(OH)3: soluble in acids, insoluble in water

Al2O3: slightly soluble in acids, insoluble in water

Al2SiO5: insoluble in acids and water

Al(ClO3)3: soluble in water and, being a soluble salt, also soluble in aqueous acid, so Al(ClO3)3 is the answer
Which of the following would be soluble in HCl but not in water?

Ag3PO4, AgBr, AgI, AgClO3

Ag3PO4: soluble in acid, insoluble in water (the answer)

AgBr: only slightly soluble in HCl, insoluble in water

AgI: insoluble in both

AgClO3: soluble in water
33. The density of a solution of 55.5 g CaCl2 (MM = 110.98 g/mol) in 500 mL of water is 1.19 g/mL. Calculate the mole fraction of water.

55.5 g CaCl2 × (1 mol CaCl2/110.98 g CaCl2) = 0.5 mol

500 g H2O × (1 kg/10^3 g) = 0.5 kg, so molality = 0.5 mol CaCl2/0.5 kg H2O = 1 m

500 g H2O + 55.5 g CaCl2 = 555.5 g solution

555.5 g solution × (1 × 10^-3 L/1.19 g solution) = 0.4668 L, so M = 0.5 mol CaCl2/0.4668 L solution = 1.07 M

n(H2O) = 500 g/(18.02 g/mol) = 27.75 mol

X(H2O) = n(H2O)/(n(H2O) + n(Ca^2+) + n(Cl^-)) = 27.75/(27.75 + 0.5 + 1.0) = 0.949
35. The density of ethylene glycol (C2H6O2) is 1.11 g/mL. To produce a solution with 3.0 mol % C2H6O2, what volume must be added per kg of water?

n(water) = 1000 g H2O × (1 mol H2O/18.02 g H2O) = 55.49 mol H2O

X(C2H6O2) = 3.0% = 0.030 = n(C2H6O2)/(n(C2H6O2) + 55.49)

n(C2H6O2) = 0.030 n(C2H6O2) + 1.6647

n(C2H6O2) = 1.6647/(1 − 0.030) = 1.716 mol C2H6O2

1.716 mol C2H6O2 × (62.07 g C2H6O2/1 mol C2H6O2) × (1 mL/1.11 g) = 95.96 mL C2H6O2
41) A solution of 30.0 grams K2SO4 in 400 grams of water is brought to a temperature of 20 °C, given that at 20 °C a saturated K2SO4 solution has a concentration of about 12 grams K2SO4 dissolved in 100 grams of water.

a) Is the solution unsaturated or supersaturated?

b) Approximately what mass of K2SO4, in grams, must be added to the solution (if originally unsaturated), or what mass of K2SO4 can be crystallized (if originally supersaturated)?

a)

Mass solute per 100 g H2O = 100 g H2O × (30 g K2SO4/400 g H2O) = 7.5 grams K2SO4

→ The solution is thus unsaturated (7.5 g < 12 g per 100 g H2O).

b)

(400 grams H2O × (12 g K2SO4/100 g H2O)) − 30.0 g K2SO4 = 18 g K2SO4 must be added
45) Assume that the solubility of a natural gas at 20 °C and 1 atm gas pressure is 0.03 g/kg of water. If a sample of natural gas under a pressure of 15 atm is kept in contact with 2.00 × 10^3 kg of water, what mass of natural gas will dissolve?

Mass of natural gas = 2.00 × 10^3 kg × (0.03 g natural gas/(1 kg H2O atm)) × 15 atm = 9 × 10^2 g natural gas
47) The aqueous solubility at 25 °C of CO2(g) at 1 atm is equal to 41.6 mL CO2(g), measured at standard temperature and pressure, per liter of water. What is the molarity of CO2(g) in water that is saturated with air at 25 °C and 1 atm? Air contains 0.039% CO2 by volume.

K(CO2) = C/P(CO2) = ((41.6 mL CO2/1 L solution) × (1 mol CO2/22,400 mL at STP))/(1 atm) = 0.00186 M/atm

Partial pressure of CO2 = 0.00039 atm

C = K(CO2) × P(CO2) = (0.00186 M/atm) × 0.00039 atm = 7.3 × 10^-7 M
49) Explain why the volume of a solution remains essentially constant as a gas dissolves in a liquid. What equation does this help to explain?

In the gaseous state the molecules have very low density, so dissolving them adds negligible volume to the solution. Changes in concentration therefore reflect only the number of dissolved gas molecules, and the mass of gas dissolved is proportional to the pressure of the gas. This is the content of Henry's law, C = K × P(gas).
51) What are the partial and total vapor pressures of a solution obtained by mixing 41.9 g methane, CH4, and 62.3 g ethanol, C2H6O, at 30 °C? At 30 °C the vapor pressure of CH4 is 51.2 mmHg; the vapor pressure of C2H6O is 31.8 mmHg.

n(M) = 41.9 g CH4 × (1 mol CH4/16.04 g CH4) = 2.61 mol CH4

n(E) = 62.3 g C2H6O × (1 mol C2H6O/46.07 g C2H6O) = 1.35 mol C2H6O

X(M) = 2.61/(2.61 + 1.35) = 0.659

X(E) = 1.35/(2.61 + 1.35) = 0.341

P(M) = 0.659 × 51.2 mmHg = 33.7 mmHg CH4

P(E) = 0.341 × 31.8 mmHg = 10.8 mmHg C2H6O

P(total) = 33.7 + 10.8 = 44.5 mmHg
53) Calculate the vapor pressure at 30 °C of a solution containing 175 g of the nonvolatile solute NaCl in 725 g H2O. The vapor pressure of water at 30 °C is 31.8 mmHg.

n(salt) = 175 g NaCl × (1 mol NaCl/58.44 g NaCl) = 2.99 mol NaCl; since NaCl dissociates, n(solute particles) = 2 × 2.99 = 5.99 mol

n(water) = 725 g H2O × (1 mol H2O/18.02 g H2O) = 40.23 mol H2O

X(water) = 40.23/(40.23 + 5.99) = 0.870

P(solution) = X(water) × P°(water) = 0.870 × 31.8 mmHg = 27.7 mmHg
43) Under an $\mathrm{N_2}(g)$ pressure of 1.00 atm, 23.54 mL of $\mathrm{N_2}(g)$ dissolves in 1.00 L $\mathrm{H_2O}$ at 0°C. What will be the molarity of $\mathrm{N_2}$ in the saturated solution at 0°C when the $\mathrm{N_2}$ pressure is 2.00 atm? (The solution volume is still 1 L.)
Solution:
To calculate the new molarity we find Henry's-law constant for the gas and multiply it by the new pressure.
$n = \frac{PV}{RT} = \frac{1.00 \text{ atm} \times 0.02354 \text{ L}}{0.08206 \text{ L atm mol}^{-1}\text{K}^{-1} \times 273 \text{ K}} = 1.05\times10^{-3}$ mol $\mathrm{N_2}$ per liter $= 1.05\times10^{-3}$ M
$[\mathrm{N_2}] = \frac{1.05\times10^{-3} \text{ M}}{1.00 \text{ atm}} \times 2.00 \text{ atm} = 2.1\times10^{-3}$ M
55) A mixture is composed of 54% $\mathrm{H_2O}(l)$ and 46% $\mathrm{CO_2}(l)$. Determine the vapor pressure of each liquid when the two portions are separated at 105°C. Assume the vapor pressure at 105°C of $\mathrm{CO_2}$ is 120 mmHg and the vapor pressure of $\mathrm{H_2O}$ is 105 mmHg. (These are not the actual values.)
Solution: First find the mole fraction of each compound, then multiply it by the respective pure vapor pressure.
Assume 100 g of the solution, so 54 g of $\mathrm{H_2O}$ and 46 g of $\mathrm{CO_2}$.
Moles $\mathrm{H_2O}$ = 54 g / (18 g/mol) = 3.0 mol
Moles $\mathrm{CO_2}$ = 46 g / (44 g/mol) = 1.045 mol
Vap. press. $\mathrm{H_2O}$ $= \frac{3.0}{3.0 + 1.045} \times 105 \text{ mmHg} = 77.9$ mmHg
Vap. press. $\mathrm{CO_2}$ $= \frac{1.045}{4.045} \times 120 \text{ mmHg} = 31.0$ mmHg
57) A solution composed of element A and element B has its normal boiling point at 100°C. If $x_A = 0.400$ and the vapor pressure of pure element B at 100°C is 320 mmHg, what is the vapor pressure of pure element A at 100°C? (Assume ideal solution behavior.)
Solution:
If $x_A = 0.4$, then $x_B = 0.6$.
Partial vapor pressure of element B $= 0.6 \times 320 = 192$ mmHg.
Since we assume ideal solution behavior, the total vapor pressure at the solution's normal boiling point is 760 mmHg.
So the partial pressure of element A $= 760 - 192 = 568$ mmHg.
$568 \text{ mmHg} = 0.4 \times P^{\circ}_A$
Vapor pressure of pure element A $= 568 / 0.4 = 1420$ mmHg.
59) A 1.5 gram sample of an unknown solute is dissolved in 500 mL of a suitable solvent at 37°C. The solution has an osmotic pressure of 2.00 mmHg. What is the molar mass of the unknown solute?
Solution:
Plug the values into equation 13.4, $\pi V = nRT$, converting $\pi$ to atm and $T$ to kelvin:
$n = \frac{\pi V}{RT} = \frac{(2.00/760) \text{ atm} \times 0.500 \text{ L}}{0.08206 \text{ L atm mol}^{-1}\text{K}^{-1} \times 310 \text{ K}} = 5.2\times10^{-5}$ mol
Molar mass of the unknown solute $= \frac{1.5 \text{ g}}{5.2\times10^{-5} \text{ mol}} \approx 2.9\times10^{4}$ g/mol
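Because the original slip here was in the units ($\pi$ must be in atm and $T$ in kelvin), the computation is also shown as a Python sketch (an added example; variable names are ours):

```python
R = 0.08206          # L atm mol^-1 K^-1

pi_atm = 2.00 / 760  # 2.00 mmHg -> atm
V_L = 0.500          # 500 mL -> L
T_K = 37 + 273.15    # 37 C -> K

n = pi_atm * V_L / (R * T_K)   # moles of solute, from pi*V = n*R*T
molar_mass = 1.5 / n           # g per mol
print(f"n = {n:.2e} mol, M = {molar_mass:.2e} g/mol")  # ~5.2e-5 mol, ~2.9e4 g/mol
```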
61) When organisms consume large amounts of salty food at one time, they tend to get thirsty for water. Why does this occur?
Solution: Consuming a large amount of salt creates a concentration imbalance between the outside and the inside of bodily cells. To dilute the high salt concentration outside the cells, water from the more dilute solution inside the cells travels across the semi-permeable cell membrane. This lowers the water content inside the cells, triggering thirst so that the cells' water supply can be replenished.
63) In what volume of water must 5 mol of a nonelectrolyte be dissolved if the solution is supposed to have an osmotic pressure of 2 atm at 390 K? Which gas law does this result resemble?
$\frac{n}{V} = \frac{\pi}{RT} = \frac{2.00 \text{ atm}}{0.08206 \text{ L atm mol}^{-1}\text{K}^{-1} \times 390 \text{ K}} = 0.0625$ M
Volume $= 5 \text{ mol} \times \frac{1 \text{ L}}{0.0625 \text{ mol solute}} = 80$ L solution $\approx 80$ L solvent
We have assumed that the solution is so dilute that its volume is essentially the volume of the solvent constituting it. Note that $n/V = \pi/RT$ has exactly the same form as the ideal gas equation rearranged to $n/V = P/RT$: the equation for osmotic pressure closely resembles the ideal gas equation.
65) At 22°C a 0.40 g sample of polypropylene (a polymer used in textiles) in 85.0 mL of benzene solution has an osmotic pressure that supports a 6.6 mm column of solution (d = 0.75 g/mL). What is polypropylene's molar mass? (For Hg, d = 13.6 g/mL.)
Determine the concentration of the solution from the osmotic pressure, converting the solution column into an equivalent mercury column via the ratio of densities:
$\pi = 6.6 \text{ mm soln} \times \frac{0.75 \text{ g/mL}}{13.6 \text{ g/mL}} \times \frac{1 \text{ atm}}{760 \text{ mmHg}} = 4.8\times10^{-4}$ atm
$\frac{n}{V} = \frac{\pi}{RT} = \frac{4.8\times10^{-4} \text{ atm}}{0.08206 \text{ L atm mol}^{-1}\text{K}^{-1} \times 295 \text{ K}} = 2.0\times10^{-5}$ M
Amount of solute
$= 85.0 \text{ mL} \times \frac{1 \text{ L}}{1000 \text{ mL}} \times 2.0\times10^{-5} \text{ M} = 1.7\times10^{-6}$ mol solute
Molar mass
$= \frac{0.40 \text{ g}}{1.7\times10^{-6} \text{ mol}} = 2.4\times10^{5}$ g/mol
75) Benzene (fp = 5.5°C; bp = 80.1°C) is a hydrocarbon; thiophene ($\mathrm{C_4H_4S}$; $K_f = 4.72$°C/m) can be used as a solvent for it. Combustion of a 2.543 g sample of benzene produces 5.150 g $\mathrm{H_2O}$ and 1.699 g of $\mathrm{CO_2}$. The freezing point of thiophene is lowered by 0.893°C when a 0.782 g sample of benzene is dissolved in 39.72 g of thiophene. What is the molecular formula of benzene?
First, determine the molality of the thiophene solution, then the molar mass of the solute.
$m = \frac{-\Delta T_f}{K_f} = \frac{0.893°C}{4.72°C/m} = 0.189\,m$
amount of solute $= 0.03972 \text{ kg thiophene} \times \frac{0.189 \text{ mol solute}}{1 \text{ kg thiophene}} = 0.00751$ mol solute
molar mass $= \frac{0.782 \text{ g}}{0.00751 \text{ mol}} = 104.1$ g/mol
Next, use the masses of the combustion products to determine the empirical formula.
Amount of C $= 1.699 \text{ g } \mathrm{CO_2} \times \frac{1 \text{ mol } \mathrm{CO_2}}{44.010 \text{ g}} \times \frac{1 \text{ mol C}}{1 \text{ mol } \mathrm{CO_2}} = 0.0386$ mol C; $\div 0.0386 = 1$
Amount of H $= 5.150 \text{ g} \times \frac{1 \text{ mol } \mathrm{H_2O}}{18.015 \text{ g}} \times \frac{2 \text{ mol H}}{1 \text{ mol } \mathrm{H_2O}} = 0.572$ mol H; $\div 0.0386 \approx 15$
This would give an empirical formula of roughly $\mathrm{CH_{15}}$, which cannot be correct for benzene ($\mathrm{C_6H_6}$); the 104.1 g/mol molar mass also disagrees with benzene's 78.11 g/mol. The arithmetic is correct, but the stated combustion masses (far more $\mathrm{H_2O}$ than $\mathrm{CO_2}$ for a carbon-rich hydrocarbon) must be in error.
## Q13.77
Many people add salt when boiling water, believing that it helps raise the boiling point and thereby shortens the cooking process. About how many grams of NaCl would you need to add to 5.0 L of water at 1 atm pressure to raise the boiling point by 5°C? Is this more than a person would typically add to their cooking water?
$m = \frac{\Delta T_b}{i \times K_b} = \frac{5°C}{2.00 \times 0.512°C/m} = 4.88\,m$
solute mass $= 5.0 \text{ L } \mathrm{H_2O} \times \frac{1 \text{ kg } \mathrm{H_2O}}{1 \text{ L } \mathrm{H_2O}} \times \frac{4.88 \text{ mol NaCl}}{1 \text{ kg } \mathrm{H_2O}} \times \frac{58.44 \text{ g NaCl}}{1 \text{ mol NaCl}} \approx 1.4\times10^{3}$ g NaCl
This is at least two hundred times the amount of salt a person would normally add when salting water for cooking purposes.
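A sketch of the same estimate in Python (an added example; $i = 2$ for NaCl, and the variable names are ours):

```python
i, K_b = 2.00, 0.512   # van 't Hoff factor for NaCl; K_b of water in C/m
dT_b = 5.0             # desired boiling-point elevation, C

m = dT_b / (i * K_b)   # molality needed: ~4.88 mol NaCl per kg water
kg_water = 5.0         # 5.0 L of water is about 5.0 kg
grams_NaCl = m * kg_water * 58.44
print(f"m = {m:.2f} mol/kg -> {grams_NaCl:.0f} g NaCl")  # ~1.4e3 g
```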
## Q13.83
Phenol, HOC6H5(aq), is a poor conductor of electricity. The same is true of ammonia, NH3(aq). However, when mixed, the resulting solution conducts electricity very well. Offer an explanation.
Combining HOC6H5(aq) with NH3(aq) produces ammonium phenoxide, NH4OC6H5(aq), which is a solution of NH4+ and OC6H5− ions: $\mathrm{NH_3(aq) + HOC_6H_5(aq) \rightarrow NH_4OC_6H_5(aq) \rightarrow NH_4^+(aq) + OC_6H_5^-(aq)}$. This solution of ions, i.e. of strong electrolytes, conducts electricity very well.
## Q13.87
A typical can of soda may contain 0.11% of an 85% $\mathrm{H_3PO_4}$ solution by mass. How many milligrams of phosphorus are contained in a half-liter bottle (about 16.9 ounces) of soda? You may assume the solution density to be 1.00 g/mL and that 1 ounce = 29.6 mL.
Start by solving for the mass of the solution.
Since density = mass / volume, mass = density × volume:
Mass of solution (soda) $= 1.00 \text{ g/mL} \times \left(16.9 \text{ oz} \times \frac{29.6 \text{ mL}}{1 \text{ oz}}\right) = 500.2$ g soda
Of the total solution, 0.11% of the mass is the $\mathrm{H_3PO_4}$ solution:
$500.2 \text{ g} \times 0.0011 = 0.55$ g $\mathrm{H_3PO_4}$ solution
Of the $\mathrm{H_3PO_4}$ solution, 85% is $\mathrm{H_3PO_4}$:
$0.55 \text{ g} \times 0.85 = 0.468$ g $\mathrm{H_3PO_4}$
For every 98 g of $\mathrm{H_3PO_4}$ there are 30.97 g of P:
$0.468 \text{ g } \mathrm{H_3PO_4} \times \frac{30.97 \text{ g P}}{98 \text{ g } \mathrm{H_3PO_4}} \times \frac{1000 \text{ mg}}{1 \text{ g}} = 147.9$ mg of P per half liter of soda
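The whole unit chain in one Python sketch (an added example; variable names are ours):

```python
mL = 16.9 * 29.6                # half-liter bottle in mL (1 oz = 29.6 mL)
g_soda = mL * 1.00              # solution density 1.00 g/mL
g_acid_soln = g_soda * 0.0011   # 0.11 % of the soda is the H3PO4 solution
g_H3PO4 = g_acid_soln * 0.85    # of which 85 % is H3PO4 by mass
mg_P = g_H3PO4 * (30.97 / 98.0) * 1000   # mass fraction of P in H3PO4
print(f"{mg_P:.0f} mg P per half liter")  # ~148 mg
```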
## Q13.88
A solution contains 120.8 g of NaOH per liter and has a density of 1.21 g/mL. Starting from 50 mL of this solution, you want to prepare 0.1 M NaOH. What mass of which component, NaOH or $\mathrm{H_2O}$, would you add to the 50 mL of solution?
## S13.88
First, solve for the molarity of the original solution.
NaOH molarity:
$\frac{120.8 \text{ g NaOH} \times \frac{1 \text{ mol NaOH}}{40.0 \text{ g NaOH}}}{1 \text{ L of solution}} = 3.02$ M NaOH
This solution is MORE concentrated than the desired 0.1 M solution, so it needs to be diluted. Determine the mass of water required in the final solution, then the mass of water in the original solution, and finally the mass of water that must be added to dilute the solution.
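A sketch of the dilution arithmetic worked out below (an added example; variable names are ours; it treats the dilute 0.1 M target as about 0.1 mol NaOH per kg of water):

```python
c_stock = 120.8 / 40.00   # 3.02 M NaOH
V_stock_L = 0.050

mol_NaOH = c_stock * V_stock_L     # 0.151 mol
g_NaOH = mol_NaOH * 40.00          # 6.04 g

kg_water_final = mol_NaOH / 0.1    # 0.1 mol NaOH per kg water -> 1.51 kg
g_stock = 50 * 1.21                # 60.5 g of stock solution
g_water_initial = g_stock - g_NaOH # 54.5 g
g_water_to_add = kg_water_final * 1000 - g_water_initial
print(f"add about {g_water_to_add:.0f} g of water")  # ~1456 g H2O
```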
Mass of $\mathrm{H_2O}$ in final solution $= 0.050 \text{ L} \times \frac{3.02 \text{ mol NaOH}}{1 \text{ L}} \times \frac{1 \text{ kg } \mathrm{H_2O}}{0.1 \text{ mol NaOH}} = 1.51$ kg $\mathrm{H_2O}$
Mass of the original solution $= 50 \text{ mL} \times \frac{1.21 \text{ g}}{1 \text{ mL}} = 60.5$ g original solution
Mass of NaOH $= 0.050 \text{ L} \times \frac{120.8 \text{ g NaOH}}{1 \text{ L solution}} = 6.04$ g NaOH
Original mass of water $= 60.5 \text{ g solution} - 6.04 \text{ g NaOH} = 54.5$ g $\mathrm{H_2O}$
Mass of $\mathrm{H_2O}$ to add $= 1510 \text{ g} - 54.5 \text{ g} \approx 1456$ g $\mathrm{H_2O}$ | 2020-04-05 01:34:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5835254788398743, "perplexity": 9116.865597822563}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370526982.53/warc/CC-MAIN-20200404231315-20200405021315-00461.warc.gz"}
https://bitbucket.org/gsakkis/pylint | # pylint
# README for Pylint - http://www.pylint.org/
Pylint is a Python source code analyzer which looks for programming errors, helps enforce a coding standard and sniffs for some code smells (as defined in Martin Fowler's Refactoring book).
Pylint has many rules enabled by default, far too many to silence them all on a minimally sized program. It is highly configurable and handles pragmas to control it from within your code. Additionally, it is possible to write plugins to add your own checks.
Development is hosted on Bitbucket: https://bitbucket.org/logilab/pylint/
You can use the code-quality@python.org mailing list to discuss Pylint. Subscribe at http://lists.python.org/mailman/listinfo/code-quality or read the archives at http://lists.python.org/pipermail/code-quality/
## Install
Pylint requires the astroid (the later the better; formerly known as logilab-astng) and logilab-common (version >= 0.53) packages.
From the source distribution, extract the tarball and run
python setup.py install
You'll have to install dependencies in a similar way. For Debian and RPM packages, use your usual tools according to your Linux distribution.
More information about installation and available distribution formats may be found in the user manual in the doc subdirectory.
## Documentation
Look in the doc/ subdirectory or at http://docs.pylint.org
Pylint is shipped with the following additional commands:
• pyreverse: a UML diagram generator
• symilar: an independent similarities checker
• epylint: an Emacs- and Flymake-compatible Pylint wrapper
• pylint-gui: a graphical interface
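As a minimal usage illustration (an added example, not part of this README; the file names and the throwaway module are ours): write a deliberately sloppy module and run the pylint executable on it.

```python
# save as run_pylint_demo.py and execute with: python run_pylint_demo.py
import subprocess

SLOPPY = '''import os

def Add(a, b):
    return a + b
'''

# Create a module with obvious issues (unused import, non-snake_case
# function name, missing docstrings) for Pylint to report.
with open("demo.py", "w") as fh:
    fh.write(SLOPPY)

# Same as typing `pylint demo.py` in a shell; extra options such as
# --disable can be appended to the argument list.
result = subprocess.run(["pylint", "demo.py"], capture_output=True, text=True)
print(result.stdout)
```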
# Recent activity
George Sakkis pushed 45 commits to gsakkis/pylint
46b4f2f - Move all source under a pylint/ directory so that setuptools.develop works
50a1da7 - [design analysis] fix badly implemented protocol for read-only containers like tuple. Close #25
443e97c - Do not emit [fixme] for every line if the config value 'notes' is empty, but [fixme] is enabled.
e7aad70 - Emit warnings about lines exceeding the column limit when those lines are inside multiline docstrings.
3e994f1 - Do not double-check parameter names with the regex for parameters and inline variables. | 2014-04-19 05:52:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40468692779541016, "perplexity": 14624.378501860514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00598-ip-10-147-4-33.ec2.internal.warc.gz"}
http://orbi.ulg.ac.be/simple-search?query=sort_dateaccessioned%3A+%5B2012-10-01*+TO+9999*%5D&title=Last+7+days&sort_by0=1&order0=DESC&start=1760 | Last 7 days

Recent changes in north-west Greenland climate documented by NEEM shallow ice core data and simulations, and implications for past-temperature reconstructions
Masson-Delmotte, V.; Steen-Larsen, H.; Ortega, P. et al., in The Cryosphere (2015), 9
Combined records of snow accumulation rate, δ18O and deuterium excess were produced from several shallow ice cores and snow pits at NEEM (North Greenland Eemian Ice Drilling), covering the period from 1724 to 2007. They are used to investigate recent climate variability and characterise the isotope–temperature relationship. We find that NEEM records are only weakly affected by inter-annual changes in the North Atlantic Oscillation. Decadal δ18O and accumulation variability is related to North Atlantic sea surface temperature and is enhanced at the beginning of the 19th century. No long-term trend is observed in the accumulation record. By contrast, NEEM δ18O shows multidecadal increasing trends in the late 19th century and since the 1980s. The strongest annual positive δ18O values are recorded at NEEM in 1928 and 2010, while maximum accumulation occurs in 1933. The last decade is the most enriched in δ18O (warmest), while the 11-year periods with the strongest depletion (coldest) are depicted at NEEM in 1815–1825 and 1836–1846, which are also the driest 11-year periods. The NEEM accumulation and δ18O records are strongly correlated with outputs from atmospheric models, nudged to atmospheric reanalyses. Best performance is observed for ERA reanalyses. Gridded temperature reconstructions, instrumental data and model outputs at NEEM are used to estimate the multidecadal accumulation–temperature and δ18O–temperature relationships for the strong warming period in 1979–2007. The accumulation sensitivity to temperature is estimated at 11 ± 2 % °C−1 and the δ18O–temperature slope at 1.1 ± 0.2 ‰ °C−1, about twice as large as previously used to estimate last interglacial temperature change from the bottom part of the NEEM deep ice core.

Depression in Women and in Men: Differences on Behavioral Avoidance and on Behavioral Activation
Wagener, Aurélie; Baeyens, Céline; Blairy, Sylvie. Poster (2015, August 06)
Depression is a well-known disorder characterized by e.g. sadness, loss of interest and pleasure, feelings of guilt or worthlessness. Depression is also characterized by a decrease of the level of engagement in activities, also conceptualized as behavioral avoidance. Indeed, depressed patients engage themselves less and less in pleasurable activities (e.g. they spend more and more time in their bed, see their friends more rarely). Reciprocally, this decrease of the level of engagement in activities reinforces and maintains depressive symptoms. This relationship between depression and a low level of engagement in activities is well-established in the scientific literature, but no study has, until now, discussed the reasons for this decrease of engagement in activities. According to theoretical models of depression (Beck, 2008; Lewinsohn, 1985; Watkins, 2009), five sets of psychological processes (PP) are involved in depressive symptomatology: negative repetitive thoughts, maladaptive emotion regulation strategies, low environmental rewards, negative self-image and inhibition. We hypothesize that these PP could be considered as explaining factors of behavioral avoidance. Furthermore, we hypothesize that other PP could be considered as explaining factors of behavioral activation (adaptive emotion regulation strategies, high environmental rewards, positive self-image, approach and high self-clarity). Our aim is thus to assess the links between behavioral avoidance as well as activation and the PP mentioned above. In order to reach this objective, we developed a model of these links based on the psychological model of mental ill-health of Kinderman (2005, 2013). According to this model, biological, social and circumstantial factors lead to mental disorders through their conjoint effects on psychological processes. Furthermore, because depression presents differently in women and in men, we assessed the adequacy of our model according to sex. Clinical and community adults completed an online survey assessing the psychological processes mentioned above, avoidance and activation. Since several questionnaires were used to assess each PP, factorial scores were computed for each one. Preliminary analyses (confirmatory factor analyses) were realized with a sample of 393 women and 139 men. The results revealed differences between men and women. For women, on the one hand, low levels of environmental rewards, maladaptive emotion regulation strategies and negative repetitive thoughts are linked to behavioral avoidance, and on the other hand, high levels of environmental rewards and positive self-image are linked to behavioral activation. For men, on the one hand, negative self-image, maladaptive emotion regulation strategies and low environmental rewards are linked to behavioral avoidance, and, on the other hand, high levels of environmental rewards and positive self-image are linked to behavioral activation. The final results will be presented during the convention, as data-collection is on-going and will end in May 2015. Clinical implications of these results will also be discussed, such as the relevance of working on the levels of environmental rewards.

A principle of similarity for nonlinear vibration absorbers
Habib, Giuseppe; Kerschen, Gaëtan. Conference (2015, August 05)
With continual interest in expanding the performance envelope of engineering systems, nonlinear components are increasingly utilized in real-world applications. This causes the failure of well-established techniques to mitigate resonant vibrations. In particular, this holds for the linear tuned vibration absorber (LTVA), which requires an accurate tuning of its natural frequency to the resonant vibration frequency of interest. This is why the nonlinear tuned vibration absorber (NLTVA), the nonlinear counterpart of the LTVA, has been recently developed. An unconventional aspect of this absorber is that its restoring force is tailored according to the nonlinear restoring force of the primary system. This allows the NLTVA to extend the so-called Den Hartog's equal-peak rule to the nonlinear range. In this work, a fully analytical procedure, exploiting harmonic balance and perturbation techniques, is developed to define the optimal value of the nonlinear terms of the NLTVA. The developments are such that they can deal with any polynomial nonlinearity in the host structure. Another interesting feature of the NLTVA, discussed in the paper, is that nonlinear terms of different orders do not interact with each other in first approximation, thus they can be treated separately. Numerical results obtained through the shooting method coupled with pseudo-arclength continuation validate the analytical developments.

Alternatives to traditional valorisation ways for brewer's spent grains
Villani, Nicolas; Aguedo, Mario; Richel, Aurore. Poster (2015, August 05)
Brewer's Spent Grains (BSG) are a highly available and cheap food supply chain waste (FSCW) that is mainly used in low-valued feed applications. This residue represents around 85 % of the total amount of waste produced by breweries, with an annual tonnage of 3.4 million tons (on a dry basis) in the European Union. Based on its composition, BSG could be valorised in a wide variety of value-added products. For example, cellulose and remaining starch could easily be turned into ethanol or used as solid state fermentation media or as platform molecules for further chemical synthesis. These alternative valorisation ways could lead to an important economic relief through the whole brewery industry. Herein is described a multistep fractionation of BSG into cellulosic pulp, free sugars, proteins, germs and lignin using an Organosolv acidic pretreatment. This extraction procedure has been optimised in order to allow the most efficient and complete valorisation of BSG.

Evidence of a fine-scale genetic structure for the endangered Pyrenean desman (Galemys pyrenaicus) in the French Pyrenees
Gillet, François; Cabria Garrido, Maria Teresa; Blanc, Frédéric et al. Poster (2015, August 05)

Jouer avec les mots, pourquoi et comment ?
Rigo, Michel. Scientific conference (2015, August 04)
Like Raymond Queneau and his hundred thousand billion poems, this talk aims to count and construct words with sometimes surprising properties. The first results in combinatorics on words go back to the beginning of the last century, with the work of the Norwegian mathematician Axel Thue. This branch of mathematics studies the structure and the arrangements appearing within finite or infinite sequences of symbols drawn from a finite set. Let us give a rudimentary example. A square is the juxtaposition of two repetitions of a word; thus "coco" or "bonbon" are squares. A word such as "taratata" is then said to contain a square. It is easy to check that, with only two symbols "a" and "b", every word of length at least 4 contains one of the squares "aa", "bb", "abab" or "baba". We therefore say that, over two symbols, squares are unavoidable. This observation raises interesting questions that are simple to state: With three symbols, can one construct an arbitrarily long word containing no square? Restricted to two symbols, can one construct an arbitrarily long word without cubes, i.e., avoiding the juxtaposition of three repetitions of the same word? Depending on the size of the alphabet, which patterns must necessarily appear and which ones are avoidable? What happens if certain permutations are allowed? etc. In this talk, we review some simple constructions of finite or infinite words: the Thue-Morse word, the Fibonacci word, Sturmian words. We also show that the applications are numerous: arithmetic, transcendence in number theory, mathematical computer science and automata theory, tilings of the plane, symbolic dynamics and coding of rotations, computer graphics, discrete geometry and the representation of line segments on screen, bioinformatics, ...
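To make the example in the preceding abstract concrete, here is a small Python sketch (an added illustration, not part of the ORBi record; the helper name has_square is ours). It verifies that every binary word of length 4 contains a square, and builds an arbitrarily extendable square-free ternary word with Thue's classical morphism a→abc, b→ac, c→b.

```python
from itertools import product

def has_square(w):
    """True if w contains a factor uu with u nonempty."""
    n = len(w)
    return any(w[i:i + L] == w[i + L:i + 2 * L]
               for L in range(1, n // 2 + 1)
               for i in range(n - 2 * L + 1))

# Over 2 symbols, squares are unavoidable: every binary word of length 4 has one.
assert all(has_square(''.join(w)) for w in product('ab', repeat=4))

# Over 3 symbols they are avoidable: iterate Thue's square-free morphism.
morph = {'a': 'abc', 'b': 'ac', 'c': 'b'}
w = 'a'
for _ in range(6):
    w = ''.join(morph[ch] for ch in w)
assert not has_square(w)
print(len(w), w[:30])  # a 96-letter square-free ternary word
```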
Quadratic reformulations of nonlinear binary optimization problems
Anthony, Martin; Boros, Endre; Crama, Yves et al. E-print/Working paper (2015)
Very large nonlinear unconstrained binary optimization problems arise in a broad array of applications. Several exact or heuristic techniques have proved quite successful for solving many of these problems when the objective function is a quadratic polynomial. However, no similarly efficient methods are available for the higher degree case. Since high degree objectives are becoming increasingly important in certain application areas, such as computer vision, various techniques have been recently developed to reduce the general case to the quadratic one, at the cost of increasing the number of variables. In this paper we initiate a systematic study of these quadratization approaches. We provide tight lower and upper bounds on the number of auxiliary variables needed in the worst-case for general objective functions, for bounded-degree functions, and for a restricted class of quadratizations. Our upper bounds are constructive, thus yielding new quadratization procedures. Finally, we completely characterize all "minimal" quadratizations of negative monomials.
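As a concrete illustration of what a quadratization does (an added sketch showing the classical single-auxiliary-variable substitution for a negative monomial, not the new constructions of the paper): over binary variables, $-x_1x_2x_3 = \min_{w\in\{0,1\}} -w(x_1+x_2+x_3-2)$, which a brute-force check confirms.

```python
from itertools import product

def neg_monomial(x1, x2, x3):
    return -(x1 * x2 * x3)

def quadratization(x1, x2, x3, w):
    # quadratic in (x, w): one auxiliary variable replaces the cubic term
    return -w * (x1 + x2 + x3 - 2)

for x in product((0, 1), repeat=3):
    best = min(quadratization(x[0], x[1], x[2], w) for w in (0, 1))
    assert best == neg_monomial(*x)
print("min over w of the quadratic reproduces -x1*x2*x3 on all 8 binary points")
```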
A next-generation approach to assess the cyanobacterial diversity and biogeography in the High Arctic (Svalbard)
Laughinghouse, Haywood Dail; Stelmach Pessi, Igor; Velazquez, David et al. Poster (2015, August 03)
Polar ecosystems are extremely sensitive to global climate changes and human activities. Cyanobacteria are key photosynthetic organisms in these latitudes, due to their roles in soil aggregation, nitrogen fixation, carbon cycles, and secondary metabolite production, among others. Previous works indicate that different cyanobacterial taxa/communities have different impacts on the environment, in both biogeochemical cycles and bioactive compound productions. Furthermore, the presence of biogeographical patterns in microorganisms, as found in macroorganisms, is an ongoing debate. In this study, during the 2013 MicroFun expedition, we sampled 72 locations around Svalbard including diverse biotopes such as glacial forefields, tundra soils, hot springs, soil crusts, microbial mats, wet walls, cryoconites, plankton and periphyton, in order to (1) assess the biodiversity of cyanobacteria around Svalbard, (2) verify the existence of biogeographical trends around the archipelago, and (3) compare these data with other polar (cold) areas, especially Antarctica. We used a pyrosequencing approach targeting cyanobacteria-specific 16S rRNA gene sequences to deeply study the cyanobacterial communities.

Development of cryopreservation methods for long-term preservation of cyanobacterial strains in the BCCM/ULC collection
Crahay, Charlotte; Renard, Marine; Mari, Maud et al. Poster (2015, August 03)
Long-term genetic and functional stability is a fundamental requirement for the maintenance of microorganisms, and cryopreservation is the preferred method for the long-term storage of many micro-organisms, including cyanobacteria. The BCCM/ULC collection currently holds 200 cyanobacterial strains, but only 62 are cryo-preserved. The main limiting factors are the low levels of survival of some strains and the long periods required to recover from cryopreservation, and thus the inability to deliver rapidly cryopreserved strains to the user community. The development of improved cryopreservation protocols is therefore required for the future expansion and valorization of the collection. The BRAIN-be project PRESPHOTO (preservation of photosynthetic micro-algae in the BCCM collections) (www.presphoto.ulg.ac.be) aims to improve the preservation of cyanobacteria and diatoms in the BCCM/ULC and BCCM/DCG collections, respectively.

The BCCM/ULC collection: a Biological Resource Center for polar cyanobacteria
Wilmotte, Annick; Renard, Marine; Lara, Yannick et al. Poster (2015, August 03)

Genome sequencing of an endemic filamentous Antarctic cyanobacterium
Lara, Yannick; Verlaine, Olivier; Kleinteich, Julia et al. Poster (2015, August 03)
The strain Phormidium priestleyi ULC007 was isolated from a benthic mat located in a shallow freshwater pond in the Larsemann Hills (69°S), Western Antarctica. This strain belongs to a cyanobacterial cluster that appeared as potentially endemic (Taton et al. 2006). After obtaining an axenic isolate, we sequenced the genome of this strain in the frame of the BELSPO CCAMBIO project, in order to better understand the functioning, metabolism and adaptive strategies of cyanobacteria in the extreme Antarctic environment.

Contribution of cyanobacteria to the building of travertines in a calcareous stream
Wilmotte, Annick; Golubic, Stjepko; Kleinteich, Julia et al. Poster (2015, August 03)
The ambient temperature travertine deposits of the calcareous Hoyoux River (Modave, Belgium) and several tributaries are organized and promoted by the filamentous cyanobacterium identified by its morphotype and ecological properties as Phormidium cf. incrustatum. A combination of techniques was used to study this biotope: physico-chemical parameters and CO2 measurements, Scanning and Transmission Electron Microscopy, RAMAN microspectroscopy. A molecular diversity study with pyrosequencing of the cyanobacterial 16S rRNA is in progress. A potential candidate was isolated in culture.

A propos des fonctions continues qui ne sont dérivables en aucun point
Esser, Céline. Conference (2015, August 03)
In 1872, Karl Weierstrass presented not just one, but a whole family of continuous, nowhere differentiable functions. After the publication of this result, many other mathematicians made their own contributions by constructing other continuous, nowhere differentiable functions. In this talk, we present the Weierstrass functions and we show that the Baire category theorem implies that the set of nowhere differentiable functions is dense in the set of continuous functions. We also study the pointwise regularity of the Weierstrass functions by introducing the notion of Hölder exponent.

Time series of high-resolution spectra of SN 2014J observed with the TIGRE telescope
Jack, D.; Mittag, M.; Schröder, K.-P. et al., in Monthly Notices of the Royal Astronomical Society (2015), 451
We present a time series of high-resolution spectra of the Type Ia supernova 2014J, which exploded in the nearby galaxy M82. The spectra were obtained with the HEROS échelle spectrograph installed at the 1.2-m TIGRE telescope. We present a series of 33 spectra with a resolution of R ≈ 20 000, which covers the important bright phases in the evolution of SN 2014J during the period from 2014 January 24 to April 1. The spectral evolution of SN 2014J is derived empirically. The expansion velocities of the Si II P-Cygni features were measured and show the expected decreasing behaviour, beginning with a high velocity of 14 000 km s^-1 on January 24. The Ca II infrared triplet feature shows a high-velocity component with expansion velocities of >20 000 km s^-1 during the early evolution, apart from the normal component showing similar velocities as Si II. Further broad P-Cygni profiles are exhibited by the principal lines of Ca II, Mg II and Fe II. The TIGRE SN 2014J spectra also resolve several very sharp Na I D doublet absorption components. Our analysis suggests interesting substructures in the interstellar medium of the host galaxy M82, as well as in our Milky Way, confirming other work on this SN. We were able to identify the interstellar absorption of M82 in the lines of Ca II H & K at 3933 and 3968 Å as well as K I at 7664 and 7698 Å. Furthermore, we confirm several diffuse interstellar bands, at wavelengths of 6196, 6283, 6376, 6379 and 6613 Å, and give their measured equivalent widths.

A Coordinated X-Ray and Optical Campaign of the Nearest Massive Eclipsing Binary, δ Orionis Aa. II. X-Ray Variability
Nichols, J.; Huenemoerder, D. P.; Corcoran, M. F. et al., in Astrophysical Journal (2015), 809
We present time-resolved and phase-resolved variability studies of an extensive X-ray high-resolution spectral data set of the δ Ori Aa binary system. The four observations, obtained with Chandra ACIS HETGS, have a total exposure time of ≈479 ks and provide nearly complete binary phase coverage. Variability of the total X-ray flux in the range of 5–25 Å is confirmed, with a maximum amplitude of about ±15% within a single ≈125 ks observation. Periods of 4.76 and 2.04 days are found in the total X-ray flux, as well as an apparent overall increase in the flux level throughout the nine-day observational campaign. Using 40 ks contiguous spectra derived from the original observations, we investigate the variability of emission line parameters and ratios. Several emission lines are shown to be variable, including S XV, Si XIII, and Ne IX. For the first time, variations of the X-ray emission line widths as a function of the binary phase are found in a binary system, with the smallest widths at φ = 0.0, when the secondary δ Ori Aa2 is at the inferior conjunction. Using 3D hydrodynamic modeling of the interacting winds, we relate the emission line width variability to the presence of a wind cavity created by a wind–wind collision, which is effectively void of embedded wind shocks and is carved out of the X-ray-producing primary wind, thus producing phase-locked X-ray variability. Based on data from the Chandra X-ray Observatory and the MOST satellite, a Canadian Space Agency mission, jointly operated by Dynacon Inc., the University of Toronto Institute of Aerospace Studies, and the University of British Columbia, with the assistance of the University of Vienna.

A Coordinated X-Ray and Optical Campaign of the Nearest Massive Eclipsing Binary, δ Orionis Aa. IV. A Multiwavelength, Non-LTE Spectroscopic Analysis
Shenar, T.; Oskinova, L.; Hamann, W.-R. et al., in Astrophysical Journal (2015), 809
Eclipsing systems of massive stars allow one to explore the properties of their components in great detail. We perform a multi-wavelength, non-LTE analysis of the three components of the massive multiple system δ Ori A, focusing on the fundamental stellar properties, stellar winds, and X-ray characteristics of the system. The primary's distance-independent parameters turn out to be characteristic for its spectral type (O9.5 II), but usage of the Hipparcos parallax yields surprisingly low values for the mass, radius, and luminosity. Consistent values follow only if δ Ori lies at about twice the Hipparcos distance, in the vicinity of the σ-Orionis cluster. The primary and tertiary dominate the spectrum and leave the secondary only marginally detectable. We estimate the V-band magnitude difference between primary and secondary to be ΔV ≈ 2.8 mag. The inferred parameters suggest that the secondary is an early B-type dwarf (≈B1 V), while the tertiary is an early B-type subgiant (≈B0 IV). We find evidence for rapid turbulent velocities (∼200 km s^-1) and wind inhomogeneities, partially optically thick, in the primary's wind. The bulk of the X-ray emission likely emerges from the primary's stellar wind (log L_X/L_Bol ≈ −6.85), initiating close to the stellar surface at R_0 ∼ 1.1 R_*. Accounting for clumping, the mass-loss rate of the primary is found to be log Ṁ ≈ −6.4 (M_⊙ yr^-1), which agrees with hydrodynamic predictions, and provides a consistent picture along the X-ray, UV, optical, and radio spectral domains.

A Coordinated X-Ray and Optical Campaign of the Nearest Massive Eclipsing Binary, δ Orionis Aa. I. Overview of the X-Ray Spectrum
Corcoran, M. F.; Nichols, J. S.; Pablo, H. et al., in The Astrophysical Journal (2015), 809
We present an overview of four deep phase-constrained Chandra HETGS X-ray observations of δ Ori A. Delta Ori A is actually a triple system that includes the nearest massive eclipsing spectroscopic binary, δ Ori Aa, the only such object that can be observed with little phase-smearing with the Chandra gratings. Since the fainter star, δ Ori Aa2, has a much lower X-ray luminosity than the brighter primary (δ Ori Aa1), δ Ori Aa provides a unique system with which to test the spatial distribution of the X-ray emitting gas around δ Ori Aa1 via occultation by the photosphere of, and wind cavity around, the X-ray dark secondary. Here we discuss the X-ray spectrum and X-ray line profiles for the combined observation, having an exposure time of nearly 500 ks and covering nearly the entire binary orbit. The companion papers discuss the X-ray variability seen in the Chandra spectra, present new space-based photometry and ground-based radial velocities obtained simultaneously with the X-ray data to better constrain the system parameters, and model the effects of X-rays on the optical and UV spectra. We find that the X-ray emission is dominated by embedded wind shock emission from star Aa1, with little contribution from the tertiary star Ab or the shocked gas produced by the collision of the wind of Aa1 against the surface of Aa2. We find a similar temperature distribution to previous X-ray spectrum analyses. We also show that the line half-widths are about 0.3–0.5 times the terminal velocity of the wind of star Aa1. We find a strong anti-correlation between line widths and the line excitation energy, which suggests that longer-wavelength, lower-temperature lines form farther out in the wind. Our analysis also indicates that the ratio of the intensities of the strong and weak lines of Fe XVII and Ne X are inconsistent with model predictions, which may be an effect of resonance scattering.

A Coordinated X-Ray and Optical Campaign of the Nearest Massive Eclipsing Binary, δ Orionis Aa. III. Analysis of Optical Photometric (MOST) and Spectroscopic (Ground-based) Variations
Pablo, Herbert; Richardson, Noel D.; Moffat, Anthony F. J. et al., in Astrophysical Journal (2015), 809
We report on both high-precision photometry from the Microvariability and Oscillations of Stars (MOST) space telescope and ground-based spectroscopy of the triple system δ Ori A, consisting of a binary O9.5II + early-B (Aa1 and Aa2) with P = 5.7 days, and a more distant tertiary (O9 IV, P > 400 years). This data was collected in concert with X-ray spectroscopy from the Chandra X-ray Observatory. Thanks to continuous coverage for three weeks, the MOST light curve reveals clear eclipses between Aa1 and Aa2 for the first time in non-phased data. From the spectroscopy, we have a well-constrained radial velocity (RV) curve of Aa1. While we are unable to recover RV variations of the secondary star, we are able to constrain several fundamental parameters of this system and determine an approximate mass of the primary using apsidal motion. We also detected second order modulations at 12 separate frequencies with spacings indicative of tidally influenced oscillations. These spacings have never been seen in a massive binary, making this system one of only a handful of such binaries that show evidence for tidally induced pulsations.

A force sensor based on three weakly coupled resonators with ultrahigh sensitivity
Zhao, Chun; Wood, Graham; Xie, J.B. et al., in Sensors and Actuators A: Physical (2015), 232
A proof-of-concept force sensor based on three degree-of-freedom (DoF) weakly coupled resonators was fabricated using a silicon-on-insulator (SOI) process and electrically tested in 20 Torr vacuum. Compared to the conventional single-resonator force sensor with frequency shift as output, by measuring the amplitude ratio of two of the three resonators, the measured force sensitivity of the 3DoF sensor was 4.9 × 10^6 /N, an improvement of two orders of magnitude. A bias stiffness perturbation was applied to avoid the mode aliasing effect and improve the linearity of the sensor. The noise floor of the amplitude ratio output of the sensor was theoretically analyzed for the first time, using the transfer function model of the 3DoF weakly coupled resonator system. It was shown, based on measurement results, that the output noise was mainly due to the thermal–electrical noise of the interface electronics. The output noise spectral density was measured and agreed well with theoretical estimations. The noise floor of the force sensor output was estimated to be approximately 1.39 nN for an assumed 10 Hz bandwidth of the output signal, resulting in a dynamic range of 74.8 dB.

What effects do rater bias and assessment method have on disease severity estimation with regard to hypothesis testing?
Chiang, Kuo-Szu; Bock, Clive; El Jarroudi, Moussa et al., in Plant Pathology (2015) | 2015-11-28 02:25:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.627275824546814, "perplexity": 7546.970694290086}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398450745.23/warc/CC-MAIN-20151124205410-00080-ip-10-71-132-137.ec2.internal.warc.gz"}
https://askdev.io/questions/107998/recursive-definition-isomorphism | # Recursive definition isomorphism
If $(X, \lt)$ is a well-ordering, I can show by transfinite recursion over the ordinals that there is a function $f$ with $f(x) = \operatorname{ran} f |_{\hat{x}}$ (where $\hat{x} = \{ y : y \lt x\}$).
I obtained $f$ in this way. Let $V$ be the class of all sets and $F:V \to V$ a class function; then there is a unique $G:ON \to V$, where $ON$ is the class of all ordinals, such that $G(\alpha) = F(G|_\alpha)$. I can use this to get a function $f$ such that $f(x) = F(f|_{\hat{x}})$. Now let $F = \{(x, \operatorname{ran} x) : x \in V\}$, and I get the function described above.
Now, this should be an isomorphism (order-preserving bijection) between $X$ and the set $I_X$ of proper initial segments of $X$, ordered by inclusion.
However, when I have $x < y$, I only see that $\operatorname{ran} f|_{\hat{x}} \subseteq \operatorname{ran} f|_{\hat{y}}$, so $f(x) \leq f(y)$. Why do I have $f(x) \neq f(y)$?
You need to use the fact that a well-ordering cannot be isomorphic to any of its proper initial segments.
Okay, it looks like you are viewing your $(X,\lt)$ as an ordinal, rather than as an arbitrary well-ordered set.
I claim that for all $y\in X$, if $x\lt y$, then $f(x)\neq f(y)$ and $f(x)\subseteq f(y)$. You have already shown $f(x)\subseteq f(y)$, so we just need to prove the inequality, by transfinite induction on $y$.
If $y$ is the least element of $X$, then there is nothing to prove and the claim holds vacuously.
Assume the claim holds for all $z\lt y$, and let $x\lt y$. If $x^+$, the successor of $x$, is also less than $y$, then $f(x)\subseteq f(x^+)\subseteq f(y)$, and $f(x)\neq f(x^+)$ by the induction hypothesis, so $f(x)\neq f(y)$.
If $y=x^+$, then $\hat{y} = \hat{x}\cup\{x\}$, so $f(y) = f(x)\cup\{f(x)\}$. If $f(x)\cup\{f(x)\} = f(x)$, then $f(x)\in f(x)$, which is impossible since ordinals are well-founded with respect to $\in$. Consequently, $f(y)=f(x)\cup\{f(x)\}\neq f(x)$.
By transfinite induction, the claim holds for all $y\in X$.
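A concrete sanity check (an added illustration, not from the original thread): take $X=\{0,1,2\}$ with the usual order and unwind the recursion $f(x)=\operatorname{ran} f|_{\hat{x}}$:

```latex
\begin{align*}
f(0) &= \operatorname{ran} f|_{\hat{0}} = \operatorname{ran} f|_{\emptyset} = \emptyset,\\
f(1) &= \operatorname{ran} f|_{\hat{1}} = \{f(0)\} = \{\emptyset\},\\
f(2) &= \operatorname{ran} f|_{\hat{2}} = \{f(0), f(1)\} = \{\emptyset, \{\emptyset\}\}.
\end{align*}
```

Each value is strictly contained in the next, exactly as the induction guarantees, and $f$ maps $X$ order-isomorphically onto the ordinal $3=\{0,1,2\}$.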
| 2022-05-16 10:53:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9514071941375732, "perplexity": 261.72473716427663}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662510117.12/warc/CC-MAIN-20220516104933-20220516134933-00391.warc.gz"}
https://cs.stackexchange.com/questions/103171/an-efficient-algorithm-to-find-a-linear-transformation-between-two-ternary-quadr | # An efficient algorithm to find a linear transformation between two ternary quadratic forms
Let $$\mathbb{F}_p$$ be a finite prime field with $$p > 2$$. Consider two ternary quadratic forms $$Q_1\!: x^2 - a_1(t)y^2 - b_1(t)z^2,\\ Q_2\!: x^2 - a_2(t)y^2 - b_2(t)z^2$$ over the field $$\mathbb{F}_p(t)$$ of rational functions with coefficients from $$\mathbb{F}_p$$. For simplicity, assume that $$a_1, a_2, b_1, b_2 \in \mathbb{F}_p[t]$$ are polynomials without multiple roots and that $$a_1$$, $$b_1$$ (respectively $$a_2$$, $$b_2$$) have no common roots.
Is there an efficient algorithm to find a linear transformation (over $$\mathbb{F}_p(t)$$) between $$Q_1$$ and $$Q_2$$ if it exists?
There is a theory relating such forms to quaternion algebras (see, for example, $$\S$$1.4 in the book by Gille and Szamuely, Central Simple Algebras and Galois Cohomology). For example, for any non-zero polynomial $$f \in \mathbb{F}_p[t]$$ and $$p > 2$$ the following quadratic forms are isomorphic: $$Q_1\!: x^2 - y^2 - f(t)z^2,\\ Q_2\!: x^2 - y^2 - z^2$$ Indeed, $$Q_1$$ can be reduced to the quadratic form $$Q_3\!: x^\prime y^\prime-(z^\prime)^2$$ by the transformation $$x := x^\prime+\frac{y^\prime}{4f},\qquad y := x^\prime-\frac{y^\prime}{4f},\qquad z := \frac{z^\prime}{f}.$$ It is well known that any two conics (including $$Q_2$$, $$Q_3$$) over a finite field are isomorphic.
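A quick symbolic check of the reduction claimed above (an added sketch; it assumes SymPy, which the post does not mention, and the variable names are ours):

```python
import sympy as sp

x1, y1, z1, f = sp.symbols('x1 y1 z1 f', nonzero=True)

# the substitution x = x' + y'/(4f), y = x' - y'/(4f), z = z'/f from the post
x = x1 + y1 / (4 * f)
y = x1 - y1 / (4 * f)
z = z1 / f

Q1 = x**2 - y**2 - f * z**2
print(sp.expand(f * Q1))  # prints x1*y1 - z1**2, i.e. Q1 = (x'y' - (z')^2)/f
```

Up to the overall factor $1/f$, this is the conic $x^\prime y^\prime-(z^\prime)^2$, as claimed.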
• What do you mean by a linear transformation? Is it $x'=c_0x+c_1$, $y'=c_2y+c_3$, $z'=c_4z+c_5$? If so, it seems easy to prove that there can never be any non-trivial linear transformation, since the coefficients of $x,y,z,1$ are zero. – D.W. Jan 21 at 16:51
• I mean a non-degenerate projective transformation: $x := c_1x + c_2y + c_3z$, $y := d_1x + d_2y + d_3z$, $z := e_1x + e_2y + e_3z$, where the coefficients are from $\mathbb{F}_p(t)$. – Dima Koshelev Jan 21 at 17:15
• Still seems impossible for the same reasons. Can you edit the question to give an example of two such quadratic forms where a linear transformation does exist? – D.W. Jan 21 at 17:18
• I added the comments. – Dima Koshelev Jan 21 at 17:59
• You claim that they are isomorphic in the question; how can you know that, if you can't find such a transformation? I think you should put some more effort into your question first. I suggest trying an example and try proving whether such a transformation exists. You should be able to write down a system of 8 equations on the 9 unknowns $c_1,c_2,c_3,d_1,d_2,d_3,e_1,e_2,e_3$ and then see if any solution exists, and thus whether any such linear transformation exists. I think you should also edit the question to show your definition of "linear transformation" in the question. – D.W. Jan 21 at 23:08 | 2019-07-21 23:58:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 22, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7808307409286499, "perplexity": 161.16773528938901}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195527396.78/warc/CC-MAIN-20190721225759-20190722011759-00390.warc.gz"} |
http://streamlinecpus.com/mean-square/minimize-mean-square-error.php | # Minimize Mean Square Error
Mean Squared Error (MSE) of an Estimator Let $\hat{X}=g(Y)$ be an estimator of the random variable $X$, given that we have observed the random variable $Y$. Part of the variance of $X$ is explained by the variance in $\hat{X}_M$. While these numerical methods have been fruitful, a closed form expression for the MMSE estimator is nevertheless possible if we are willing to make some compromises. As we have seen before, if $X$ and $Y$ are jointly normal random variables with parameters $\mu_X$, $\sigma^2_X$, $\mu_Y$, $\sigma^2_Y$, and $\rho$, then, given $Y=y$, $X$ is normally distributed with \begin{align}%\label{} have a peek here
In other words, x {\displaystyle x} is stationary. Browse other questions tagged linear-algebra statistics machine-learning or ask your own question. ISBN0-387-98502-6. You don't know anything else about $Y$.In this case, the mean squared error for a guess $t,$ averaging over the possible values of $Y,$ is$E(Y - t)^2$.Writing $\mu = E(Y)$, https://www.probabilitycourse.com/chapter9/9_1_5_mean_squared_error_MSE.php
## Minimum Mean Square Error Algorithm
Let the attenuation of sound due to distance at each microphone be a 1 {\displaystyle a_{1}} and a 2 {\displaystyle a_{2}} , which are assumed to be known constants. When x {\displaystyle x} is a scalar variable, the MSE expression simplifies to E { ( x ^ − x ) 2 } {\displaystyle \mathrm ^ 6 \left\{({\hat ^ 5}-x)^ ^ This type of proofs can be done picking some value $m$ and proving that it satisfies the claim, but it does not prove the uniqueness, so one can imagine that there In particular, when C X − 1 = 0 {\displaystyle C_ σ 6^{-1}=0} , corresponding to infinite variance of the apriori information concerning x {\displaystyle x} , the result W =
Sorceries in Combat phase When to stop rolling a dice in a game where 6 loses everything Detecting harmful LaTeX code What are the legal consequences for a tourist who runs What are the legal and ethical implications of "padding" pay with extra hours to compensate for unpaid work? ISBN978-0471181170. Mean Square Estimation Lemma Define the random variable $W=E[\tilde{X}|Y]$.
Adding Views - VS Adds Scaffolding and NuGets Can I stop this homebrewed Lucky Coin ability from being exploited? Minimum Mean Square Error Matlab Create a 5x5 Modulo Grid Sieve of Eratosthenes, Step by Step Players Characters don't meet the fundamental requirements for campaign Etymologically, why do "ser" and "estar" exist? An estimator x ^ ( y ) {\displaystyle {\hat ^ 2}(y)} of x {\displaystyle x} is any function of the measurement y {\displaystyle y} . That is why it is called the minimum mean squared error (MMSE) estimate.
## What ensures that $\sum_{k=1}^n \|x_k - m \|^2$ is minimized?
Note that $\sum_{k=1}^n \|x_k - m\|^2$ is constant because it does not depend on $x_0$ (both $x_k$ and $m$ are computed from $X_0$). Your proof does not establish uniqueness (perhaps because this is considered "clear").
Solution: Since $X$ and $W$ are independent and normal, $Y$ is also normal. Another feature of this estimate is that for $m < n$, there need be no measurement error. Hope that clears the confusion. – shaktiman Oct 22 '15 at 3:31
Also, \begin{align} E[\hat{X}^2_M]=\frac{EY^2}{4}=\frac{1}{2}. \end{align} In the above, we also found $MSE=E[\tilde{X}^2]=\frac{1}{2}$. It is required that the MMSE estimator be unbiased.
The estimation error is $\tilde{X}=X-\hat{X}_M$, so \begin{align} X=\tilde{X}+\hat{X}_M. \end{align} Since $\textrm{Cov}(\tilde{X},\hat{X}_M)=0$, we conclude \begin{align}\label{eq:var-MSE} \textrm{Var}(X)=\textrm{Var}(\hat{X}_M)+\textrm{Var}(\tilde{X}). \hspace{30pt} (9.3) \end{align} The above formula can be interpreted as follows. The number of measurements (i.e., the dimension of $y$) need not be at least as large as the number of unknowns, $n$. First, note that \begin{align} E[\tilde{X} \cdot g(Y)|Y]&=g(Y) E[\tilde{X}|Y]\\ &=g(Y) \cdot W=0. \end{align} Next, by the law of iterated expectations, we have \begin{align} E[\tilde{X} \cdot g(Y)]=E\big[E[\tilde{X} \cdot g(Y)|Y]\big]=0. \end{align} We are now done.
This can happen when $y$ is a wide-sense stationary process. As a consequence, to find the MMSE estimator, it is sufficient to find the linear MMSE estimator.
Then, the MSE of a constant estimate $a$ is given by \begin{align} h(a)&=E[(X-a)^2]\\ &=EX^2-2aEX+a^2. \end{align} This is a quadratic function of $a$, and we can find the minimizing value of $a$ by differentiation: \begin{align} h'(a)=-2EX+2a, \end{align} which vanishes exactly at $a=EX$.
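These identities are easy to check by simulation. The following sketch (editorial, not from the original page; it assumes NumPy, and the jointly normal parameters are arbitrary illustrative choices) verifies the variance decomposition (9.3), the orthogonality of the error to functions of $Y$, and that $h(a)$ is minimized at $a=EX$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative jointly normal (X, Y).
mu_x, mu_y, s_x, s_y, rho = 1.0, -0.5, 2.0, 1.5, 0.6
cov = [[s_x**2, rho * s_x * s_y],
       [rho * s_x * s_y, s_y**2]]
x, y = rng.multivariate_normal([mu_x, mu_y], cov, size=200_000).T

# MMSE estimator for jointly normal variables: X_hat_M = E[X | Y].
x_hat = mu_x + rho * s_x * (y - mu_y) / s_y
err = x - x_hat                           # estimation error X-tilde

print(x.var(), x_hat.var() + err.var())   # Var(X) = Var(X_hat) + Var(err)
print(np.mean(err * np.sin(y)))           # ~0: error orthogonal to any g(Y)

# h(a) = E[(X - a)^2] = EX^2 - 2a EX + a^2 is minimized at a = EX.
a = np.linspace(mu_x - 1, mu_x + 1, 201)
h = (x**2).mean() - 2 * a * x.mean() + a**2
print(a[h.argmin()])                      # approximately EX = mu_x
```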
https://math.stackexchange.com/questions/606620/how-to-prove-relation-is-asymmetric-if-it-is-both-anti-symmetric-and-irreflexive | How to prove a relation is asymmetric if it is both anti-symmetric and irreflexive
Prove a relation is asymmetric if it is both anti-symmetric and irreflexive (anti-reflexive).
I tried to go from the definitions of the relations:
Anti-symmetric: $\forall x,y \, (xRy \land yRx \Rightarrow x=y )$
Irreflexive: $\forall x\in A,\ (x,x)\notin R$
Asymmetric: $\forall x,y \in A \,(xRy \Rightarrow \lnot yRx )$
But it doesn't get me anywhere... I also tried to think about proof by contraposition, but I can't seem to connect the definitions.
Any help would be appreciated.
Proof by contradiction will work here.
Assume $R$ is antisymmetric and irreflexive:
• Let $R$ be irreflexive: $$\forall x \in A, (x, x)\notin R$$ which means, equivalently, $$\forall x \in A, \lnot( xRx)$$
• Let $R$ be antisymmetric: $$\forall x \in A, \forall y \in A, \Big(x R y \land yRx \rightarrow (x = y)\Big)$$
And assume, for contradiction, that $R$ is not asymmetric. The negation of asymmetry is given by $$\exists x \in A, \exists y \in A\,\Big(x R y \land yRx\Big)$$
Now show that this assumption contradicts antisymmetry or irreflexivity:
Can you see that this last assumption implies, by the definition of antisymmetry, that $x = y$?
But if $x = y$, then $xRy \implies xRx$.
But this contradicts irreflexivity! Contradiction. $\square$
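The implication can also be sanity-checked mechanically. Here is a small exhaustive search (an editorial sketch in Python, not part of the original exchange): it enumerates all $2^9$ relations on a 3-element set and confirms that every anti-symmetric, irreflexive relation is asymmetric.

```python
from itertools import product

A = range(3)
pairs = list(product(A, repeat=2))

def antisymmetric(R):
    # (xRy and yRx) implies x = y
    return all(x == y for (x, y) in R if (y, x) in R)

def irreflexive(R):
    return all((x, x) not in R for x in A)

def asymmetric(R):
    # xRy implies not yRx
    return all((y, x) not in R for (x, y) in R)

# Enumerate every relation on A and test the implication.
for bits in product([0, 1], repeat=len(pairs)):
    R = {p for p, b in zip(pairs, bits) if b}
    if antisymmetric(R) and irreflexive(R):
        assert asymmetric(R)
print("No counterexample on a 3-element set.")
```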
• I guess I didn't know how to negate the statement. Thank you. – GinKin Dec 14 '13 at 16:00
• You're welcome, GinKin! – Namaste Dec 14 '13 at 16:01
• @amWhy: Nice feedback +1 – Amzoti Dec 15 '13 at 0:16 | 2019-09-21 16:00:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8934004902839661, "perplexity": 253.15848715358254}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574532.44/warc/CC-MAIN-20190921145904-20190921171904-00356.warc.gz"} |
https://www.springerprofessional.de/unconventional-computation-and-natural-computation/3960956 | scroll identifier for mobile
main-content
## Über dieses Buch
This book constitutes the thoroughly refereed post-conference proceedings of the 11th International Conference on Unconventional Computation, UC 2012, held in Orléans, France, during September 3-7, 2012. The 28 revised full papers presented were carefully selected from numerous submissions. Conference papers are organized in 4 technical sessions, covering topics of hypercomputation, chaos and dynamical systems based computing, granular, fuzzy and rough computing, mechanical computing, cellular, evolutionary, molecular, neural, and quantum computing, membrane computing, amorphous computing, swarm intelligence; artificial immune systems, physics of computation, chemical computation, evolving hardware, the computational nature of self-assembly, developmental processes, bacterial communication, and brain processes
## Inhaltsverzeichnis
### The Holy Grail: Finding the Genetic Bases of Phenotypic Characters
A main goal in human genomics is to compare the genetic sequences of different individuals to identify chromosomal regions where genetic variants are shared. Using this information, researchers will be able to discover how genetic differences impact the expression of different phenotypic characters such as disease susceptibility or drug resistance. One of the main sources of genetic variation is represented by Single Nucleotide Polymorphisms (SNPs) possessed by individuals in a population and compiled into haplotypes. Haplotypes allow one to highlight the combined effect of multiple SNPs on the phenotypic character and greatly increase the significance of the predicted associations. Since each person possesses two haplotypes for most regions of the genome but they cannot be directly extracted by common wet-lab experiments, the inference of haplotype pairs from "raw" genetic data (genotypes) is a key computational problem in this area.
Paola Bonizzoni
### Inductive Complexity of P versus NP Problem
Extended Abstract
Using the complexity measure developed in [7,3,4] and the extensions obtained by using inductive register machines of various orders in [1,2], we determine an upper bound on the inductive complexity of second order of the P versus NP problem. From this point of view, the P versus NP problem is more complex than the Riemann hypothesis.
Cristian S. Calude, Elena Calude, Melissa S. Queen
Generally, phenomena of spontaneous pattern formation are random and repetitive, whereas elaborate devices are the deterministic product of human design. Yet, biological organisms and collective insect constructions are exceptional examples of complex systems that are both self-organized and architectured. Can we understand their precise self-formation capabilities and integrate them with technological planning? Can physical systems be endowed with information, or informational systems be embedded in physics, to create autonomous morphologies and functions? A new field of research, Morphogenetic Engineering, was established [1] to explore the modeling and implementation of “self-architecturing” systems. Particular emphasis is set on the programmability and computational abilities of self-organization, properties that are often underappreciated in complex systems science—while, conversely, the benefits of self-organization are often underappreciated in engineering methodologies.
René Doursat
### Reasoning As Though
It is sometimes useful to know that we can safely reason as though something were true, even when it almost certainly is not. This talk will survey instances of this phenomenon in computer science and molecular programming.
Jack H. Lutz
### Universality and the Halting Problem for Cellular Automata in Hyperbolic Spaces: The Side of the Halting Problem
In this paper, we recall results on universality for cellular automata in hyperbolic spaces, mainly results about weak universality, and we deal with the halting problem in the same setting. This latter problem is very close to that of strong universality. The paper focuses on the halting problem and can be seen as a preliminary approach to strong universality for cellular automata in hyperbolic spaces.
Maurice Margenstern
### An Introduction to Tile-Based Self-assembly
In this tutorial, we give a brief introduction to the field of tile-based algorithmic self-assembly. We begin with a description of Winfree’s abstract Tile Assembly Model (aTAM) and a few basic exercises in designing tile assembly systems. We then survey a series of results in the aTAM. Next, we introduce the more experimentally realistic kinetic Tile Assembly Model (kTAM) and provide an exercise in error correction within the kTAM, then an overview of kTAM results. We next introduce the 2-Handed Assembly Model (2HAM), which allows entire assemblies to combine with each other in pairs, along with an exercise in developing a 2HAM system, and then give overviews of a series of 2HAM results. Finally, we briefly introduce a wide array of more recently developed models and discuss their various tradeoffs in comparison to the aTAM and each other.
Matthew J. Patitz
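To make the aTAM dynamics sketched in this abstract concrete, here is a toy simulation (an editorial sketch, not from the tutorial; the single tile type, the glue labels, and the temperature $\tau=2$ are arbitrary choices). A tile may attach at an empty site when its glues that match already-placed neighbours have total strength at least $\tau$:

```python
# Minimal abstract Tile Assembly Model (aTAM) sketch.
# A tile is a map from sides N/E/S/W to a (glue label, strength) pair or None.
TAU = 2
N, E, S, W = 0, 1, 2, 3
OPP = {N: S, E: W, S: N, W: E}
STEP = {N: (0, 1), E: (1, 0), S: (0, -1), W: (-1, 0)}

seed   = {N: ("a", 2), E: ("a", 2), S: None, W: None}
grower = {N: ("a", 2), E: ("a", 2), S: ("a", 2), W: ("a", 2)}
tiles = [grower]

assembly = {(0, 0): seed}

def binding_strength(pos, tile):
    """Summed strength of glues matching already-placed neighbours."""
    total = 0
    for side, (dx, dy) in STEP.items():
        nb = assembly.get((pos[0] + dx, pos[1] + dy))
        if nb is not None and tile[side] is not None and nb[OPP[side]] == tile[side]:
            total += tile[side][1]
    return total

# Attach tiles until nothing more fits inside a 5x5 region.
changed = True
while changed:
    changed = False
    frontier = {(x + dx, y + dy) for (x, y) in assembly for dx, dy in STEP.values()}
    for pos in frontier:
        if pos in assembly or not (0 <= pos[0] < 5 and 0 <= pos[1] < 5):
            continue
        for tile in tiles:
            if binding_strength(pos, tile) >= TAU:
                assembly[pos] = tile
                changed = True
                break

print(len(assembly), "tiles in the final assembly")  # 25: the full 5x5 square
```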
### Spatial Computing in MGS
This short paper motivates and introduces the tutorial on MGS and spatial computing presented at UCNC 2012.
Antoine Spicher, Olivier Michel, Jean-Louis Giavitto
### P Systems Controlled by General Topologies
In this paper we investigate the use of general topological spaces as control mechanisms for basic classes of membrane systems employing only rewrite and communication rules.
Erzsébet Csuhaj-Varjú, Marian Gheorghe, Mike Stannett
### P Systems with Minimal Left and Right Insertion and Deletion
In this article we investigate the operations of insertion and deletion performed at the ends of a string. We show that using these operations in a P systems framework (which corresponds to using specific variants of graph control), computational completeness can even be achieved with the operations of left and right insertion and deletion of only one symbol.
Rudolf Freund, Yurii Rogozhin, Sergey Verlan
### Lower Bounds on the Complexity of the Wavelength-Based Machine
The optical wavelength-based machine, or simply w-machine, is a computational model designed based on physical properties of light. The machine deals with sets of binary numbers, and performs computation using four defined basic operations. The sets are implemented as light rays and wavelengths are considered as binary numbers. Basic operations are then implemented using simple optical devices.
In this paper, we have provided a polynomial lower bound on the complexity of any w-machine computing all satisfiable SAT formulas. We have shown that the provided lower bound is tight by providing such a w-machine. Although the size complexity of the SAT problem on the w-machine is polynomial, the provided optical implementation requires an exponential amount of energy to compute it.
We have also provided an exponential lower bound on the complexity of most w-machine languages, by showing that, as n tends to infinity, the ratio of the number of n-bit languages requiring an exponential-size w-machine to the number of all n-bit languages converges to 1.
### String Matching with Involutions
We propose a novel algorithm for locating in a text T every occurrence of a string that can be obtained from a given pattern P by successively applying antimorphic involutions on some of its factors. When the factors on which these involutions are applied overlap, a linear time algorithm is obtained. When we apply the involutions to non-overlapping factors we obtain an algorithm running in $${\mathcal{O}}(|T||P|)$$ time and $${\mathcal{O}}(|P|)$$ space, in the worst case. We also improve the latter algorithm to achieve linear average running time, when the alphabet of the pattern is large enough.
Cristian Grozea, Florin Manea, Mike Müller, Dirk Nowotka
### Distributed Execution of Automata Networks on a Computing Medium: Introducing IfAny Machines
A computing medium is a set of Processing Elements (PEs) homogeneously distributed in space, with connections that are local in space. PEs are fine-grained, and are therefore modeled as Finite State Machines (FSMs). In this elementary framework, the interaction between PEs can be defined by a set of instructions, which return a value depending on the neighbors' states. That value is then used as an input to the FSM. This paper studies an instruction set reduced to a single instruction called "IfAny q" that tests if any of the neighbors has a given state q. This instruction puts a minimal requirement on hardware: there is no need for addressing channels; communication can be done by local radio broadcasting. An IfAny machine A running on a network tailored for a specific computational task can be executed in parallel on an IfAny medium whose network is fixed and reflects the locality in space. The execution involves an embedding of A's network, and a transformation of A's FSM that adds a 3-state register. We analyse the example of A realizing the addition of n binary numbers. With a carefully chosen network embedding, the resulting parallel execution is optimal in time and space with respect to VLSI complexity.
This work demonstrates that IfAny machines can be seen as a rudimentary programming method for computing media. It represents a first step of our long-term project, which is to realize general-purpose parallel computation on a computing medium.
Frederic Gruau, Luidnel Maignan
### Symbol Representations in Evolving Droplet Computers
We investigate evolutionary computation approaches as a mechanism to program networks of excitable chemical droplets. For this kind of systems, we assigned a specific task and concentrated on the characteristics of signals representing symbols. Given a Boolean function like Identity, OR, AND, NAND, XOR, XNOR or the half-adder as the target functionality, 2D networks composed of 10×10 droplets were considered in our simulations. Three different setups were tested: Evolving network structures with fixed on/off rate coding signals, coevolution of networks and signals, and network evolution with fixed but pre-evolved signals. Evolutionary computation served in this work not only for designing droplet networks and input signals but also to estimate the quality of a symbol representation: We assume that a signal leading to faster evolution of a successful network for a given task is better suited for the droplet computing infrastructure. Results show that complicated functions like XOR can evolve using only rate coding and simple droplet types, while other functions involving negations like the NAND or the XNOR function evolved slower using rate coding. Furthermore we discovered symbol representations that performed better than the straight forward on/off rate coding signals for the XNOR and AND Boolean functions. We conclude that our approach is suitable for the exploration of signal encoding in networks of excitable droplets.
Gerd Gruenert, Gabi Escuela, Peter Dittrich
### Inductive Complexity of Goodstein’s Theorem
We use the recently introduced inductive complexity measure [1, 2] to evaluate the inductive complexity of Goodstein's Theorem, a statement that is independent of Peano Arithmetic.
Joachim Hertel
### Towards a Biomolecular Learning Machine
Learning and generalisation are fundamental behavioural traits of intelligent life. We present a synthetic biochemical circuit which can exhibit non-trivial learning and generalisation behaviours, which is a first step towards demonstrating that these behaviours may be realised at the molecular level. The aim of our system is to learn positive real-valued weights for a real-valued linear function of positive inputs. Mathematically, this can be viewed as solving a non-negative least-squares regression problem. Our design is based on deoxyribozymes, which are catalytic DNA strands. We present simulation results which demonstrate that the system can converge towards a desired set of weights after a number of training instances are provided.
Matthew R. Lakin, Amanda Minnich, Terran Lane, Darko Stefanovic
### Tractional Motion Machines: Tangent-Managing Planar Mechanisms as Analog Computers and Educational Artifacts
Concrete and virtual machines play a central role both in Unconventional Computing (machines as computers) and in Math Education (the influence of artifacts on reaching/producing abstract thought). Here we will examine some consequences in these fields of Tractional Motion Machines, planar mechanisms based on devices used since the late 17th century to plot the solutions of differential equations by managing the tangent.
Pietro Milici
### Computing with Sand: On the Complexity of Recognizing Two-dimensional Sandpile Critical Configurations
In this work we study the complexity of recognizing the critical configurations of the two-dimensional Abelian Sandpile Model. We review some known facts and prove that there does not exist a polylog-depth uniform polynomial-size family of monotone Boolean circuits solving this problem; this result suggests that the recognition of critical configurations cannot be accomplished in polylog time employing a polynomial number of processors.
J. Andres Montoya
### Genome Parameters as Information to Forecast Emergent Developmental Behaviors
In this paper we measure genomic properties in EvoDevo systems to predict emergent phenotypic characteristics of artificial organisms. We describe and compare three parameters calculated out of the composition of the genome, to forecast the emergent behavior and structural properties of the developed organisms. The parameters are each calculated by including different genomic information. The genotypic information explored is: purely regulatory output, regulatory input and relative output considered independently, and an overall parameter calculated out of genetic dependency properties. The goal of this work is to gain more knowledge on the relation between genotypes and the behavior of emergent phenotypes. Such knowledge will give information on genetic composition in relation to artificial developmental organisms, providing guidelines for the construction of EvoDevo systems. A minimalistic developmental system based on Cellular Automata is chosen in the experimental work.
Stefano Nichele, Gunnar Tufte
### Heterotic Computing Examples with Optics, Bacteria, and Chemicals
Unconventional computers can perform embodied computation that directly exploits the natural dynamics of the substrate. But such in materio devices are often limited, special-purpose machines. To be practically useful, unconventional devices must usually be combined with classical computers or control systems. However, there is currently no established way to do this, or to combine different unconventional devices.
In this position paper we describe heterotic unconventional computation, an approach that focusses on combinations of unconventional devices. This will need a sound semantic framework defining how diverse unconventional computational devices can be combined in a way that respects the intrinsic computational power of each, whilst yielding a hybrid device that is capable of more than the sum of its parts. We also describe a suite of diverse physical implementations of heterotic unconventional computers, comprising computation performed by bacteria hosted in chemically built material, sensed and controlled optically and chemically.
Susan Stepney, Samson Abramsky, Matthias Bechmann, Jerzy Gorecki, Viv Kendon, Thomas J. Naughton, Mario J. Perez-Jimenez, Francisco J. Romero-Campero, Angelika Sebald
### Reliable Node Placement in Wireless Sensor Networks Using Cellular Automata
Wireless sensor networks are often used to provide critical measurements in unattended harsh environments. They should be designed to adequately monitor their surroundings while being resilient to environmental changes. Appropriate sensor node placement greatly influences their capability to perform this task. Cellular automata have properties very similar to those of wireless sensor networks. In this paper, we present a sensor node placement algorithm that runs on a cellular automaton and achieves adequate coverage, connectivity and sparsity while being resilient to changing environmental conditions.
Sami Torbey, Selim G. Akl
### Robust Evaluation of Expressions by Distributed Virtual Machines
We show how expressions written in a functional programming language can be robustly evaluated on a modular asynchronous spatial computer by compiling them into a distributed virtual machine comprised of reified bytecodes undergoing diffusion and communicating via messages containing encapsulated virtual machine states. Because the semantics of the source language are purely functional, multiple instances of each reified bytecode and multiple execution threads can coexist without inconsistency in the same distributed heap.
Lance R. Williams
### Numerical Evaluation of the Average Number of Successive Guesses
This work has been inspired by problems addressed in the field of computer security, where the attacking of, e.g., password systems is an important issue. In [2] Lundin et al. discuss measures related to the number of guesses or attempts a supposed attacker needs for revealing information. Here several numerical approaches are discussed for evaluating the average number of successive guesses required for correctly guessing the value of a string of independent and identically-distributed random variables. The guessing strategy used is guessing strings in decreasing order of probability [1].
### Discrete Discs and Broadcasting Sequences
Neighbourhood Sequences are deemed to be important in many practical applications within digital imaging through their use in measuring digital distance.
Aggregation of neighbourhood sequences based on classical digital distance functions was proposed in [1] as an alternative method for organising swarms of robots in a non-oriented grid environment. Wave phenomena generated nodal patterns in a discrete environment via the two neighbourhood sequences, providing a distributed algorithm to find the centre of a digital disc. The geometric shapes that can be formed by such sequences in 2-D are quite limited, and so the constraints are relaxed to allow as neighbours any two points at Euclidean distance r (r-neighbours); such neighbourhoods are represented by the digital disc of radius r.
Thomas Nickson, Igor Potapov
### Optical Analog Feedback in Euglena-Based Neural Network Computing
Using living microbial cells in computational processing is a fascinating challenge for incorporating their autonomous adaptation and exploration abilities into a physical computing algorithm [1]. When the stimulus to the cells is given as analog values, more flexible solutions would be expected in microbe-based neurocomputing [1], owing to the diversity of reaction thresholds among the cells. We have investigated optical analog feedback in Euglena-based neurocomputing, for a task of selecting some of 16 compartments while avoiding the first and second nearest compartments [2].
Kazunari Ozasa, Jeesoo Lee, Simon Song, Mizuo Maeda, Masahiko Hara
### Gardening Cyber-Physical Systems
Today’s artefacts, from small devices to buildings and cities, are, or are becoming, cyber-physical socio-technical systems, with tightly interwoven material and computational parts. Currently, we have to laboriously build such systems, component by component, and the results are often difficult to maintain, adapt, and reconfigure. Even “soft” ware is brittle and non-trivial to adapt and change.
Susan Stepney, Ada Diaconescu, René Doursat, Jean-Louis Giavitto, Taras Kowaliw, Ottoline Leyser, Bruce MacLennan, Olivier Michel, Julian F. Miller, Igor Nikolic, Antoine Spicher, Christof Teuscher, Gunnar Tufte, Francisco J. Vico, Lidia Yamamoto
### Towards a Theory of Self-constructing Automata
Self-constructing automata (SCA) are automata which construct their own state set on the fly. Here, we do not provide a class of automata, but rather a perspective on automata: we can reconstruct any class of automata as a class of SCA. An SCA is defined by 1. an input alphabet Σ and a state alphabet Ω, 2. a map $$\phi:\Sigma\cup\{\epsilon\}\rightarrow \wp(\Omega\times\Omega)$$; this map is homomorphically extended over strings and interprets concatenation as relation composition; and 3. an accepting relation F ⊆ i×Ω*. For $\mathfrak{A}$ an SCA, put $$L(\mathfrak{A})=\{w:\phi(w)\cap F\neq\emptyset\}$$.
Christian Wurm
### Flower Pollination Algorithm for Global Optimization
Flower pollination is an intriguing process in the natural world. Its evolutionary characteristics can be used to design new optimization algorithms. In this paper, we propose a new algorithm, namely, flower pollination algorithm, inspired by the pollination process of flowers. We first use ten test functions to validate the new algorithm, and compare its performance with genetic algorithms and particle swarm optimization. Our simulation results show the flower algorithm is more efficient than both GA and PSO. We also use the flower algorithm to solve a nonlinear design benchmark, which shows the convergence rate is almost exponential.
Xin-She Yang
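For orientation, the algorithm admits a very small sketch (editorial, not the author's code: Gaussian steps stand in for the Lévy flights of the full method, the sphere function is a toy objective, and the switch probability $p=0.8$ follows common practice):

```python
import numpy as np

def sphere(x):                            # toy objective: global minimum at 0
    return np.sum(x**2)

def fpa(f, dim=5, n=20, iters=500, p=0.8, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, (n, dim))
    fit = np.array([f(x) for x in pop])
    best = pop[fit.argmin()].copy()
    for _ in range(iters):
        for i in range(n):
            if rng.random() < p:          # global pollination toward g*
                step = rng.normal(size=dim)    # stand-in for a Levy flight
                cand = pop[i] + step * (best - pop[i])
            else:                         # local pollination between two flowers
                j, k = rng.choice(n, 2, replace=False)
                cand = pop[i] + rng.random() * (pop[j] - pop[k])
            fc = f(cand)
            if fc < fit[i]:               # greedy replacement
                pop[i], fit[i] = cand, fc
                if fc < f(best):
                    best = cand.copy()
    return best, f(best)

print(fpa(sphere))                        # converges near the origin
```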
### Backmatter
https://de.maplesoft.com/support/help/view.aspx?path=Statistics%2FDistributions%2FNonCentralChiSquare | NonCentralChiSquare - Maple Help
Statistics[Distributions]
NonCentralChiSquare
noncentral chi-square distribution
Calling Sequence
NonCentralChiSquare(nu, delta)
NonCentralChiSquareDistribution(nu, delta)
Parameters
nu - degrees of freedom
delta - noncentrality parameter
Description
• The noncentral chi-square distribution is a continuous probability distribution with probability density function given by:
$f\left(t\right)=\begin{cases}0 & t<0\\[4pt] \dfrac{e^{-\frac{t}{2}-\frac{\delta}{2}}\, t^{\frac{\nu}{2}-1}\, \mathrm{BesselI}\!\left(\frac{\nu}{2}-1,\sqrt{\delta t}\right)}{2\left(\delta t\right)^{\frac{\nu}{4}-\frac{1}{2}}} & \text{otherwise}\end{cases}$
subject to the following conditions:
$0<\nu, \quad 0\le \delta$
• The NonCentralChiSquare variate with noncentrality parameter delta=0 and degrees of freedom nu is equivalent to the ChiSquare variate with degrees of freedom nu.
• Note that the NonCentralChiSquare command is inert and should be used in combination with the RandomVariable command.
Notes
• The Quantile and CDF functions applied to a noncentral chi-square distribution use a sequence of iterations in order to converge on the desired output point. The maximum number of iterations to perform is equal to 100 by default, but this value can be changed by setting the environment variable _EnvStatisticsIterations to the desired number of iterations.
Examples
> $\mathrm{with}\left(\mathrm{Statistics}\right):$
> $X≔\mathrm{RandomVariable}\left(\mathrm{NonCentralChiSquare}\left(\mathrm{\nu },\mathrm{\delta }\right)\right):$
> $\mathrm{PDF}\left(X,u\right)$
$\left\{\begin{array}{cc}0 & u<0\\ \dfrac{e^{-\frac{u}{2}-\frac{\delta}{2}}\, u^{\frac{\nu}{2}-1}\, \mathrm{hypergeom}\left(\left[\,\right],\left[\frac{\nu}{2}\right],\frac{\delta u}{4}\right)}{\Gamma\left(\frac{\nu}{2}\right)\, 2^{\frac{\nu}{2}}} & \text{otherwise}\end{array}\right.$ (1)
> $\mathrm{PDF}\left(X,\frac{1}{2}\right)$
$\dfrac{e^{-\frac{1}{4}-\frac{\delta}{2}}\,\left(\frac{1}{2}\right)^{\frac{\nu}{2}-1}\,\mathrm{hypergeom}\left(\left[\,\right],\left[\frac{\nu}{2}\right],\frac{\delta}{8}\right)}{\Gamma\left(\frac{\nu}{2}\right)\, 2^{\frac{\nu}{2}}}$ (2)
> $\mathrm{Mean}\left(X\right)$
$\nu+\delta$ (3)
> $\mathrm{Variance}\left(X\right)$
$2\nu+4\delta$ (4)
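The closed-form moments above are easy to cross-check outside Maple; for example (an editorial sketch assuming SciPy, whose ncx2 distribution uses the same $(\nu, \delta)$ parameterization via df and nc):

```python
from scipy.stats import ncx2

nu, delta = 4.0, 1.5                      # illustrative parameter values
dist = ncx2(df=nu, nc=delta)

print(dist.mean(), nu + delta)            # both 5.5, matching Mean(X)
print(dist.var(), 2*nu + 4*delta)         # both 14.0, matching Variance(X)
print(dist.pdf(0.5))                      # PDF(X, 1/2) evaluated numerically
```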
References
Evans, Merran; Hastings, Nicholas; and Peacock, Brian. Statistical Distributions. 3rd ed. Hoboken: Wiley, 2000.
Johnson, Norman L.; Kotz, Samuel; and Balakrishnan, N. Continuous Univariate Distributions. 2nd ed. 2 vols. Hoboken: Wiley, 1995.
Stuart, Alan, and Ord, Keith. Kendall's Advanced Theory of Statistics. 6th ed. London: Edward Arnold, 1998. Vol. 1: Distribution Theory. | 2022-09-30 22:45:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 12, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9762545824050903, "perplexity": 2409.8677483243264}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00257.warc.gz"} |
https://www.cis.upenn.edu/~danroth/Teaching/CIS-700-006/index.html | ### Course Description
Making decisions in natural language processing problems often involves assigning values to sets of interdependent variables where the expressive dependency structure can influence, or even dictate, what assignments are possible. Structured learning problems such as semantic role labeling provide one such example, but the setting is broader and includes a range of problems such as named entity and relation recognition and co-reference resolution. The setting is also appropriate for cases that may require a solution to make use of multiple models (possibly pre-designed or pre-learned components) as in summarization, textual entailment and question answering.
This semester, we will devote the course to the study of structured learning problems in natural language processing. We will start by recalling the "standard" learning formulations as used in NLP, move to formulations of multiclass classification, and from then on focus on models of structured prediction and how they are being used in NLP.
Through lectures and paper presentations the course will introduce some of the central learning frameworks and techniques that have emerged in this area over the last few years, along with their application to multiple problems in NLP and Information Extraction. The course will cover:
Models: We will present discriminative models such as the structured Perceptron and structured SVM, probabilistic models, and Constrained Conditional Models.
Training Paradigms: Joint Learning models; Decoupling Learning from Inference; Constraint-Driven Learning; Semi-Supervised Learning of Structure; Indirect Supervision
Inference: Constrained Optimization Models, Integer Linear Programming, Approximate Inference, Dual Decomposition.
### Prerequisites
Machine Learning class; CIS 419/519/520 or equivalent. Knowledge of NLP is recommended but not mandatory.
There will be:
• Course Projects (40%) - The project will be done in teams of 2 or 3; teams will propose projects and consult with us. We will define a few intermediate milestones, and results will be reported and presented at the end of each stage.
• Critical Surveys ( 6 + 6 + 6 + 12 = 30% ) - Four (4) times a semester you will write a short critical essay on one of the additional readings.
• Presentations ( 20% ) - Once or twice you will present a paper from the additional readings (30 minutes, focusing on the mathematical/technical details of the paper). The presentations will be prepared in groups and, whenever possible, a group of presentations will form a coherent tutorial (more on that later).
• Class Participation ( 10% )
There is no final exam.
#### Expectations
This is an advanced course. I view my role as guiding you through the material and helping you in your first steps as a researcher. I expect that your participation in class, reading assignments and presentations will reflect independence, mathematical rigor and critical thinking.
https://www.nag.com/numeric/py/nagdoc_latest/naginterfaces.library.lapacklin.dgttrf.html | naginterfaces.library.lapacklin.dgttrf¶
naginterfaces.library.lapacklin.dgttrf(n, dl, d, du)[source]
dgttrf computes the $LU$ factorization of a real $n\times n$ tridiagonal matrix $A$.
For full information please refer to the NAG Library document for f07cd
https://www.nag.com/numeric/nl/nagdoc_27.3/flhtml/f07/f07cdf.html
Parameters
n : int
$n$, the order of the matrix $A$.
dl : float, array-like, shape $\left(n-1\right)$
Must contain the subdiagonal elements of the matrix $A$.
d : float, array-like, shape $\left(n\right)$
Must contain the diagonal elements of the matrix $A$.
du : float, array-like, shape $\left(n-1\right)$
Must contain the superdiagonal elements of the matrix $A$.
Returns
dl : float, ndarray, shape $\left(n-1\right)$
Is overwritten by the multipliers that define the matrix $L$ of the $LU$ factorization of $A$.
d : float, ndarray, shape $\left(n\right)$
Is overwritten by the diagonal elements of the upper triangular matrix $U$ from the $LU$ factorization of $A$.
du : float, ndarray, shape $\left(n-1\right)$
Is overwritten by the elements of the first superdiagonal of $U$.
du2 : float, ndarray, shape $\left(n-2\right)$
Contains the elements of the second superdiagonal of $U$.
ipiv : int, ndarray, shape $\left(n\right)$
Contains the pivot indices that define the permutation matrix $P$. At the $i$th step, row $i$ of the matrix was interchanged with row $\mathrm{ipiv}[i]$. $\mathrm{ipiv}[i]$ will always be either $i$ or $i+1$; $\mathrm{ipiv}[i]=i$ indicates that a row interchange was not performed.
Raises
NagValueError
(errno $-1$)
On entry, error in parameter n.
Constraint: $n\geq 0$.
Warns
NagAlgorithmicWarning
(errno $i>0$)
Element $i$ of the diagonal is exactly zero. The factorization has been completed, but the factor $U$ is exactly singular, and division by zero will occur if it is used to solve a system of equations.
Notes
dgttrf uses Gaussian elimination with partial pivoting and row interchanges to factorize the matrix $A$ as
$A=PLU\text{,}$
where $P$ is a permutation matrix, $L$ is unit lower triangular with at most one nonzero subdiagonal element in each column, and $U$ is an upper triangular band matrix, with two superdiagonals.
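A minimal usage sketch (editorial, not from the NAG document; it assumes the returned values unpack in the order listed under Returns above):

```python
import numpy as np
from naginterfaces.library.lapacklin import dgttrf

n = 5
dl = np.full(n - 1, -1.0)   # subdiagonal of the tridiagonal matrix A
d = np.full(n, 4.0)         # diagonal of A
du = np.full(n - 1, -1.0)   # superdiagonal of A

# Factorize A = P L U; the inputs are overwritten per the Returns section.
dl, d, du, du2, ipiv = dgttrf(n, dl, d, du)

print(d)     # diagonal of the upper triangular factor U
print(ipiv)  # pivot indices defining P
```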
References
Anderson, E, Bai, Z, Bischof, C, Blackford, S, Demmel, J, Dongarra, J J, Du Croz, J J, Greenbaum, A, Hammarling, S, McKenney, A and Sorensen, D, 1999, LAPACK Users’ Guide, (3rd Edition), SIAM, Philadelphia, https://www.netlib.org/lapack/lug | 2021-10-19 08:21:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9545109868049622, "perplexity": 3362.7185142320254}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585246.50/warc/CC-MAIN-20211019074128-20211019104128-00466.warc.gz"} |