# Remove photo padding in moderncv classic
When \photo is used in moderncv, it includes a ~3px padding between the image and the border. How does one remove this? I have found the part of the moderncvstyleclassic.sty that deals with the photo (line 160), but don't know how to change it. I'd also like to know how to change the colour of the border, or even remove it entirely.
Please add a minimal working example (MWE) that illustrates your problem. It will be much easier for us to reproduce your situation and find out what the issue is when we see compilable code, starting with \documentclass{...} and ending with \end{document}. – Marco Daniel May 10 '13 at 9:09
It's also very important to add your version of moderncv. In the last few weeks there have been many updates. – Marco Daniel May 10 '13 at 9:10
The current macro \photo has two options, one for the photo size, one for the thickness of the border. To remove the border use 0pt. – Kurt May 10 '13 at 10:44
Guess I'm using an old version of moderncv then T_T – user973066 May 10 '13 at 11:51
To remove the border around the photo, just use a current version of moderncv and try the following MWE (just to show you how an MWE could look for your case). In line 21 the border is activated; in the commented line 22 the border is deactivated. Just move the comment sign %.
The macro \photo[photo width][border thickness]{name of photo} does what you want.
Please note that I used a picture from the package mwe, which should already be installed on your computer and does not have to be loaded in order to use the included pictures.
%http://tex.stackexchange.com/questions/113566/remove-photo-padding-in-moderncv-classic
\documentclass[11pt,a4paper,sans]{moderncv}
\moderncvstyle{casual}
\moderncvcolor{blue}
\usepackage[scale=0.75]{geometry}
%\setlength{\hintscolumnwidth}{3cm} % change the width of the column with the dates
\setlength{\footskip}{37pt} % defines space for footer
% personal data
\firstname{John}
\familyname{Doe}
\title{Resumé title} % optional, remove / comment the line if not wanted
\address{street and number}{postcode city}{country}% optional, ...
\mobile{+1~(234)~567~890} % optional, ...
\phone{+2~(345)~678~901} % optional, ...
\fax{+3~(456)~789~012} % optional, ...
\email{john@doe.org} % optional, ...
\homepage{www.johndoe.com} % optional, ...
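The body of the MWE is cut off above; it could continue roughly as follows. This is only a sketch: the example-image picture ships with the mwe package, and the 64pt photo width is the one used in the stock moderncv templates.
% --- sketch of how the MWE might continue (assumed, not from the original post) ---
\photo[64pt][0.4pt]{example-image}  % photo with a thin border
%\photo[64pt][0pt]{example-image}   % same photo with the border removed
\begin{document}
\makecvtitle
\section{Education}
\cventry{2013}{Degree}{Institution}{City}{}{Description}
\end{document}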
# Unconditional versus conditional Welfare measure
Dear all,
I have a question regarding the proper way of calculating the value of both the unconditional and conditional (upon starting from the steady state) welfare measure. From what I see after browsing similar questions here in the forum, in Dynare the unconditional welfare corresponds to the unconditional ergodic mean of the welfare variable:
Welfare_pos=strmatch('Welfare',M_.endo_names,'exact');
Welfare_uncon=oo_.mean(Welfare_pos);
By contrast, the conditional welfare criterion (upon starting in the steady state) is given by the constant in the policy and transition functions output that Dynare produces. I guess its interpretation is the conditional ergodic mean, which in a 2nd-order approximation is different from the steady state. The way I compute it is:
Welfare_pos=strmatch('Welfare',M_.endo_names,'exact');
Welfare_con = oo_.dr.ys(Welfare_pos)+0.5*oo_.dr.ghs2(oo_.dr.inv_order_var(Welfare_pos));
Is that right? I am asking since I obtain very weird values for the conditional welfare both in sign and in values as they are very different from the unconditional welfare measure.
Best,
Peter
Yes, that is correct. If you are in the steady state, you can use the steady state plus the uncertainty correction. You can cross check this with
initial_condition_states = repmat(oo_.dr.ys,1,M_.maximum_lag); %get steady state as initial condition
shock_matrix = zeros(1,M_.exo_nbr); %create shock matrix with number of time periods in rows
y_sim = simult_(initial_condition_states,oo_.dr,shock_matrix,options_.order); %simulate one period to get value
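To read the conditional welfare off this simulation, one would then take the value of the welfare variable in the simulated period (a sketch; Welfare_pos as defined earlier in the thread):
Welfare_con_check = y_sim(Welfare_pos,2); %column 1 is the initial (steady-state) condition, column 2 the simulated period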
as in
The interpretation is that this is the welfare conditional on being in the steady state, but factoring in that the model is stochastic, i.e. that shocks in the future may happen.
Perfect match, many thanks Johannes.
But Professor Johannes, isn’t what you just said the same as Valerio said in:
Back there you said this is a wrong way of computing conditional welfare as we can see in:
I am not sure I understand your post. For conditional welfare, you take the first element of the simulation without shocks, not the last one of a long simulation.
Thank you for the response! That was indeed my question!
Hi Professor,
Is the unconditional welfare the risky steady state of welfare?
Best,
Yunpeng
Hi @dypisgood. In my humble opinion, no.
Deterministic Steady State: when we want to compute the conditional welfare, usually we start/perturb the economy from this type of steady state. At the steady state, there are no shocks.
Risky Steady State: despite no shocks, the agent observes the distribution of the shocks that may hit the economy in the future, e.g. the standard deviation \sigma. In this context, we simulate long sequences of endogenous variables at a higher order to see whether they converge to the risky (or stochastic) steady state.
Ergodic/unconditional mean: we get this value by simulating the economy with shocks over a long horizon; averaging the welfare variable then gives the unconditional welfare.
Yes, unconditional welfare is the average welfare with shocks.
Is unconditional welfare another name for steady-state welfare as mentioned in other posts? Seems like it.
I guess steady-state welfare is something like Wss = Uss + bet* Uss?
Assuming you solved analytically for Uss, then I guess you don’t need to use simulated data of average welfare with shocks to compute unconditional welfare, right? Thanks!!
No, welfare at the steady state is conditional welfare (conditional on the steady state). In a nonlinear model, the average over all states and shocks is not the steady state, so unconditional welfare is different.
But the conditions underlying conditional welfare can be different, right?
Like in Gali’s book, welfare is conditioned, for example, on cost-push shocks, I guess. But we can also have conditional welfare conditioned on a stochastic steady-state, and conditional welfare conditioned on a deterministic steady-state and transitional path?
Whatever type is the conditional welfare, by taking the average, we get unconditional welfare? Like in Gali’s book, is this a conditional welfare loss function, conditioned on shocks?
And the average welfare loss per period would be unconditional?
Hi @HelloDynare, if I may ask. What is long-run here? If you create a variable W = U + bet* U(+1), simulate the model and compute mean of W, you get conditional welfare (per this thread Welfare computation). But if you take the mean in the long run, then you get unconditional welfare? Thanks!
The difference is about the information set of the expectations operator. Conditional welfare uses conditional expectations, i.e. it conditions on some information about the state today as well as potentially on future shocks. For unconditional welfare, we use unconditional expectations.
If your system is ergodic, then the time average will correspond to expected value.
But how long should the time be for the average of (W = U + bet*U(+1)) to be considered unconditional, assuming the system is ergodic? And how short should the time be for W to be considered conditional?
1. That’s impossible to tell ex-ante. You would need to check for convergence or compute it analytically.
2. Conditional involves evaluating welfare at a particular point in the state space today.
I am lost here. Check the convergence of? The model?
I think I get the concept. Like on the other thread, E(welfare) is unconditional welfare, and it does not depend on any conditions or shocks. E_t(welfare), on the other hand, is conditional welfare. But it is still not clear to me how to do that in Dynare…
Say I have policy functions for utility and welfare in the mod file:
U = (1-sigma)*C
Welfare = U+ beta*Welfare(+1)
The unconditional welfare is saved in oo_.mean after simulation. Do I need to set periods=100000? And how to get conditional welfare from dynare?
I know we can rewrite welfare functions in other ways, but generally, all oo_ variables are unconditional? Say I have a different welfare function;
variance.y=oo_.var(y_pos,y_pos);
variance.pi=oo_.var(pi_pos,pi_pos);
L=0.5*((par.siggma+(par.varphi+par.alppha)/(1-par.alppha))*variance.y_gap+par.epsilon/par.lambda*variance.pi)/100;
Then variance.y and variance.pi are unconditional variances? What would be the conditional counterparts? Sorry for a long question.
1. The convergence of the average over periods to the expected value. If it still fluctuates considerably, it has not yet converged.
2. You can use the simult_-function to compute welfare at any given point in time of your simulations. See e.g.
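For point 1, a rough way to eyeball convergence is to plot the running mean of the simulated welfare series. The lines below are only a sketch with assumed variable names, to be run after a stochastic simulation with a long periods setting:
Welfare_pos=strmatch('Welfare',M_.endo_names,'exact');
W_sim=oo_.endo_simul(Welfare_pos,:); %simulated series stored by stoch_simul when periods>0
running_mean=cumsum(W_sim)./(1:numel(W_sim)); %average over the first t periods
plot(running_mean) %a flat tail suggests the time average has converged to the unconditional mean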
## Heisenberg Indeterminacy Equation Confusion
$\Delta p \Delta x\geq \frac{h}{4\pi }$
DTingey_1C
Posts: 55
Joined: Fri Aug 30, 2019 12:16 am
### Heisenberg Indeterminacy Equation Confusion
I don't fully understand what the equation means conceptually. I understand that the equation represents the range of momentum and location of an electron, but what does this mean? Since the location is represented by "x," does this mean it is a distance? Or is it more of an area of probability? Is the estimate of the velocity (coming from the momentum) relative to the nucleus of the atom? If the electron moves in a circular pattern, is it rotational momentum that's calculated? Thank you for helping and sorry if I'm being confusing.
Kevin Antony 2B
Posts: 71
Joined: Sat Sep 07, 2019 12:16 am
### Re: Heisenberg Indeterminacy Equation Confusion
From what I understand, we don't exactly know where an electron is at a given moment so the "x" is similar to probability in the sense that we think the electron should be somewhere in that vicinity. As far as momentum goes, we appear to be calculating linear momentum as it's just mass x velocity and not rotational momentum.
I hope this gives you a little bit more clarity!
305421980
Posts: 65
Joined: Sat Sep 07, 2019 12:16 am
### Re: Heisenberg Indeterminacy Equation Confusion
For the equation, delta x represents the probability of position, so like the equation itself, it is the probability of the electron being in a certain spot. In terms of momentum, we are just using non-rotational momentum relative to the mass of the electron and its velocity.
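As a rough numerical illustration (the numbers here are made up for the example): for an electron with mass $m \approx 9.11 \times 10^{-31}$ kg whose speed is known to within $\Delta v = 1 \times 10^{3}$ m/s, the momentum uncertainty is $\Delta p = m \Delta v \approx 9.11 \times 10^{-28}$ kg·m/s, so $\Delta x \geq \frac{h}{4\pi \Delta p} \approx \frac{6.626 \times 10^{-34}}{4\pi \times 9.11 \times 10^{-28}} \approx 5.8 \times 10^{-8}$ m. The position is then uncertain over tens of nanometres, far larger than the atom itself.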
# All Questions
### What is the difference between Huffman and EBCOT?
I have a question about entropy coding and image compression. Huffman coding and EBCOT coding are both entropy coding, correct or not? I know about Huffman coding, but I don't know EBCOT. How does it work? ...
### Transformations of two Laplace distributions resulting in a Laplace distribution
Suppose we have two independent identical random variables $X_1$ and $X_2$ with Laplace distribution \begin{align} f_X(x)=\frac{1}{2b}e^{-\frac{|x|}{b}} \end{align} I am looking for a non-...
### Determine SE(3) transform between pair of sensors producing 2d line segments
Say we have sensorA and sensorB with an unknown transform between them, and that we have n ...
### Why does MLE work for continuous distributions?
In the attachment below you can see the definition of the likelihood function. Likelihood 1) Whilst the explanation of why the whole max likelihood method is viable for discrete distributions is ...
### What is embedding?
I am new to this, so do I need to learn topology in order to understand this? I came across a statement that, unlike the 2D sphere, the 2d saddle surface cannot be embedded in 3D Euclidean space (...
### Prove: Let $a,b\in \mathbb{R}$ such that $a\lt b$, and $f:[a,b]\rightarrow \mathbb{R}$ be monotonic, then $\frac{1}{f}$ is also monotonic
Prove or disprove: Let $f:[a,b]\rightarrow \mathbb{R}$ be monotone. If $f(x)\ne 0$ for all $x\in[a,b]$, then $1/f$ is also monotone on $[a,b]$. I've been sitting on this for quite a while trying to find a ...
### Inverse matrix = echelon form of $(M|E_n)$?
Why is it that, if I want to calculate the inverse of a matrix, the echelon form of $(M|E_n)$ will give it to me? For example: $\{\{1,-2,0,1,0,0\},\{0,2,1,0,1,0\},\{-1,1,2,0,0,1\}\}$ in echelon form ...
### A convergent series: $\sum_{n=0}^\infty 3^{n-1}\sin^3\left(\frac{\pi}{3^{n+1}}\right)$
I would like to find the value of: $$\sum_{n=0}^\infty 3^{n-1}\sin^3\left(\frac{\pi}{3^{n+1}}\right)$$ I could only see that the ratio of two consecutive terms is $\dfrac{1}{27\cos(2\theta)}$.
### Generators of so(7)
Short version: Let $V$ be a 7-dimensional linear space of (real) square matrices. Suppose further that $[V,V]$ (the linear space spanned by $[X,Y]$, $X,Y\in V$) is isomorphic to $\mathfrak{so}(7)$. Can ...
### Is a Blaschke product/rational function a covering map for an $n$-sheeted covering of $S^{1}$?
We have a Blaschke product $B(z)$ of order $n$ (you can think of it as a rational function with $n$ zeros and $n$ poles); the zeros are obviously inside $\mathbb{D}$. Why is $B(z) \colon S^{1} \to S^{...
# Eigenstates
For a given operator ($H$) one can calculate the $N_{\psi}$ lowest eigenstates with the function “Eigensystem()”. The function “Eigensystem()” uses iterative methods and needs a starting point as input. This can either be a set of wavefunctions or a set of restrictions. If “Eigensystem()” is called with a set of starting functions, the eigenstates found are those $N_{\psi}$ with the lowest energy that have a nonzero matrix element of the operator $(H+1)^\infty$ with the starting state.
Example.Quanty
-- Eigenstates of the Lz operator
-- starting from a wavefunction
NF=6
NB=0
IndexDn={0,2,4}
IndexUp={1,3,5}
psip = NewWavefunction(NF, NB, {{"100000", math.sqrt(1/2)}, {"000010", math.sqrt(1/2)}})
OppLz = NewOperator("Lz", NF, IndexUp, IndexDn)
Eigensystem(OppLz,psip)
You do not need to specify a set of starting functions; you can instead specify a set of starting restrictions. If you want to find the lowest 3 eigenstates with two electrons in the $p$ shell, one can set restrictions such that all orbitals in the $p$ shell are included in the counting and the occupation is at least 2 and at most 2.
Example.Quanty
-- Eigenstates of the Lz operator
-- starting from a set of restrictions
NF=6
NB=0
IndexDn={0,2,4}
IndexUp={1,3,5}
OppLz = NewOperator("Lz", NF, IndexUp, IndexDn)
StartRestrictions = {NF, NB, {"111111",2,2}}
Npsi = 3
psiList = Eigensystem(OppLz, StartRestrictions, Npsi)
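To inspect the result one could, for example, print the expectation value of the operator in each of the returned eigenstates. The lines below are only a sketch; they assume the usual Quanty convention that psi * O * psi evaluates the expectation value of the operator O in the state psi.
-- print the Lz expectation value of each of the Npsi eigenstates found above (sketch)
for i = 1, Npsi do
  print(psiList[i] * OppLz * psiList[i])
end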
Fubini's Theorem and Evaluating Double Integrals over Rectangles
We have just looked at Iterated Integrals over rectangles. You might now wonder how iterated integrals relate to the double integrals that we looked at earlier. Fubini's Theorem gives us a relationship between double integrals and these iterated integrals.
Theorem 1 (Fubini's Theorem): Let $z = f(x, y)$ be a two variable real-valued function. If $f$ is continuous on the rectangle $R = [a, b] \times [c, d]$ then the double integral over $R$ can be computed as an iterated integral, and $\iint_{R} f(x, y) \: dA = \int_a^b \int_c^d f(x, y) \: dy \: dx = \int_c^d \int_a^b f(x, y) \: dx \: dy$.
Fubini's Theorem is critically important as it gives us a method to evaluate double integrals over rectangles without having to use the definition of a double integral directly.
Now the following corollary will give us another method for evaluating double integrals over a rectangle $R = [a, b] \times [c, d]$ provided that $f$ can be written as a product of a function in terms of $x$ and a function in terms of $y$.
Corollary 1: Let $z = f(x, y)$ be a two variable real-valued function. If $f(x, y) = g(x) h(y)$ and $f$ is continuous on $R = [a, b] \times [c, d]$, then $\iint_R f(x, y) \: dA = \iint_R g(x) h(y) \: dA = \left [ \int_a^b g(x) \: dx \right ] \left [ \int_c^d h(y) \: dy \right ]$.
Before we look at some examples of solving some double integrals, we should again be reminded of the following techniques of integration in single variable calculus that we might find useful:
It is also important to note that when evaluating iterated integrals, we can choose whether we want to integrate with respect to $x$ or $y$ first, as we saw on the Evaluating Iterated Integrals over Rectangles page. It is always worth acknowledging that partially integrating with respect to one variable first may be easier than the other.
Now let's look at some examples of evaluating double integrals over rectangles.
Example 1
Evaluate $\iint_R xy + y^2 \: dA$ where $R = [0, 1] \times [1, 2]$.
By Fubini's Theorem we have that:
(1)
\begin{align} \quad \iint_R xy + y^2 \: dA = \int_0^1 \int_1^2 xy + y^2 \: dy \: dx \end{align}
Now let's evaluate the inner integral $\int_1^2 xy + y^2 \: dy$ first while holding $x$ as fixed:
(2)
\begin{align} \quad \int_1^2 xy + y^2 \: dy = \left [ \frac{xy^2}{2} + \frac{y^3}{3} \right ]_1^2 = \left ( 2x + \frac{8}{3} \right ) - \left ( \frac{x}{2} + \frac{1}{3} \right ) = \frac{3x}{2} + \frac{7}{3} \end{align}
And so we have that:
(3)
\begin{align} \quad \int_0^1 \int_1^2 xy + y^2 \: dy \: dx = \int_0^1 \frac{3x}{2} + \frac{7}{3} \: dx = \left [ \frac{3x^2}{4} + \frac{7x}{3} \right ]_0^1 = \frac{3}{4} + \frac{7}{3} = \frac{37}{12} \end{align}
Example 2
Evaluate $\iint_R e^x \cos y \: dA$ where $R = [0, 1] \times \left [ \frac{\pi}{2} , \pi \right ]$.
Note that $f(x, y) = e^x \cos y$ can be written as the product of a function of $x$ and a function of $y$ if we let $g(x) = e^x$ and $h(y) = \cos y$ (then $f(x, y) = g(x) h(y)$). Therefore applying Corollary 1 we get that:
(4)
\begin{align} \quad \iint_R e^x \cos y \: dA = \left [ \int_0^1 e^x \: dx \right ] \left [ \int_{\frac{\pi}{2}}^{\pi} \cos y \: dy \right ] = [ e - 1 ] [ -1] = 1 - e \end{align}
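As a quick cross-check, evaluating the same integral as an iterated integral (applying Fubini's Theorem directly) gives the same result:
(5)
\begin{align} \quad \int_0^1 \int_{\frac{\pi}{2}}^{\pi} e^x \cos y \: dy \: dx = \int_0^1 e^x \left [ \sin y \right ]_{\frac{\pi}{2}}^{\pi} \: dx = \int_0^1 - e^x \: dx = - (e - 1) = 1 - e \end{align}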
Geodesic proof
Homework Statement
Prove that the shortest path between two points on a sphere is a great circle.
Homework Equations
Euler-Lagrange and variational calculus
The Attempt at a Solution
in spherical coords:
N.B. $$\dot{\phi} = \frac{d\phi}{d\theta}$$
$$ds = \sqrt{r^{2}d\theta^{2} + r^{2}\sin^{2}\theta\, d\phi^{2}}$$
s = $$\int^{x_{1}}_{x_{2}} ds = \int^{x_{1}}_{x_{2}} r \sqrt{1 + \sin^{2}\theta\, \dot{\phi}^{2}}\, d\theta$$
$$f = \sqrt{1 + \sin^{2}\theta\, \dot{\phi}^{2}}$$
$$\frac{d}{d\theta}\frac{\partial f}{\partial \dot{\phi}} = 0$$
$$\frac{\partial f}{\partial \dot{\phi}} = const = c$$
ok, let's rearrange...
$$\dot{\phi} = \frac{c}{\sqrt{r^{2} - c^{2}\sin^{2}\theta}}$$
so let's substitute in s...
s = $$\int^{x_{1}}_{x_{2}} r \sqrt{1 + \sin^{2}\theta\, \frac{c^{2}}{r^{2} - c^{2}\sin^{2}\theta}}\; d\theta$$
s = $$\int^{x_{1}}_{x_{2}} \frac{r^{2}\, d\theta}{\sqrt{r^{2} - c^{2}\sin^{2}\theta}}$$
But I can't integrate that, so what to do?
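For reference, here is a sketch of one standard way to push the Euler-Lagrange result through to an integrable form, working from $$f = \sqrt{1 + \sin^{2}\theta\, \dot{\phi}^{2}}$$ as defined above (the constant factor $$r$$ in the integrand only rescales the constant $$c$$):
$$\frac{\partial f}{\partial \dot{\phi}} = \frac{\sin^{2}\theta\, \dot{\phi}}{\sqrt{1 + \sin^{2}\theta\, \dot{\phi}^{2}}} = c \;\;\Rightarrow\;\; \dot{\phi} = \frac{c}{\sin\theta\, \sqrt{\sin^{2}\theta - c^{2}}}$$
With the substitution $$u = \cot\theta, \qquad du = -\frac{d\theta}{\sin^{2}\theta}$$ the integral for $$\phi$$ becomes
$$\phi = \int \frac{c\, d\theta}{\sin^{2}\theta\, \sqrt{(1 - c^{2}) - c^{2}\cot^{2}\theta}} = -\int \frac{c\, du}{\sqrt{(1 - c^{2}) - c^{2}u^{2}}} = \phi_{0} - \arcsin\left(\frac{c\, u}{\sqrt{1 - c^{2}}}\right)$$
so that
$$\cot\theta = \frac{\sqrt{1 - c^{2}}}{c}\, \sin(\phi_{0} - \phi)$$
which, after multiplying both sides by $$r\sin\theta$$, is the equation of a plane through the origin, i.e. the path lies on a great circle.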
# pgfplots: drawing multiple vectors with corresponding unit vectors
Suppose you have a couple of 3D-vectors given by their start and endpoints. Now you want to define a command that draws one of these vectors together with its respective unit vector from the same origin.
A similar problem has been answered before here - however the solution only works for vectors starting from (0,0,0).
As I struggled to apply the presented \foreach-approach to two given coordinates simultaneously, and given the age of the solution, I approached this problem using the calculator package.
I'm getting a functioning output for a single vector - unfortunately only the last unit vector (called by \vecuvec{1,1,0}{0,2,1}) is getting drawn correctly for multiple executions of \vecuvec as the coordinates (\sola,\solb,\solc) seem to be overwritten by the last execution of \vecuvec.
Is there a practical alternative, for example storing/accessing the coordinates in a different way or changing the scopes within the command \vecuvec?
## MWE:
\documentclass{article}
\usepackage{calculator}
\usepackage[]{pgfplots}
\usepackage{tikz-3dplot}
\pgfplotsset{compat=1.15}
% Draw vector and corresponding unitvector
\newcommand{\vecuvec}[2] %start point, end point (of vector)
{ \VECTORSUB(#2)(#1)(\sola,\solb,\solc)
\UNITVECTOR(\sola, \solb, \solc)(\sola,\solb,\solc)
%arrow in blue
\draw[->,thick,blue] (#1) -- (#2);
%corresponding unit-vector in red:
\draw[->, thick,red] (#1) -- ($(#1)+(\sola,\solb,\solc)$);
}
\begin{document}
\begin{tikzpicture}
\begin{axis}[xtick={0,1,...,4}, ytick={0,1,...,4}, ztick={0,1,...,4},
xmin=0,xmax=4,ymin=0,ymax=4, zmin=0,zmax=4]
\vecuvec{3,1,0}{4,2,1};
\vecuvec{2,2,2}{4,3,3};
\vecuvec{1,1,0}{0,2,1}; %only the last one works as intended
\end{axis}
\end{tikzpicture}
\end{document}
## Output:
Here for the blue vectors only the leftmost has its unit vector drawn correctly (in red).
• Have you tried reading the docs? (Look for axis direction cs) – Henri Menke Dec 21 '17 at 23:54
• @HenriMenke Perhaps I'm missing something, but how does that help here? – Torbjørn T. Dec 22 '17 at 0:03
• @TorbjørnT. (#1) -- ($(#1)+(\sola,\solb,\solc)$) is a relative coordinate calculation which does not work like this in pgfplots. For relative coordinates you need axis direction cs as explained in the manual. So I guess the correct behaviour would be obtained by (#1) -- ++(axis direction cs:\sola,\solb,\solc) – Henri Menke Dec 22 '17 at 0:05
• @HenriMenke a) No it's not, you're thinking about (#1) -- +(\sola,\solb,\solc), which is a different thing from what the OP is using (calc library syntax). b) The problem here is I, would think, delayed expansion, similar to why you need special care for loops inside an axis. Note that the third of those work fine, and that the other two unit vectors are the same as the third. Remove the third, and the second works fine. – Torbjørn T. Dec 22 '17 at 0:14
The same trick used for \draw macros in loops (see pgfplots manual section 8.1 Utility commands) works here as well it seems, i.e.
\edef\temp{\noexpand\draw[->, thick,red] (#1) -- ($(#1)+(\sola,\solb,\solc)$);}
\temp
which causes immediate expansion of \sola etc.
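For instance, for the first call \vecuvec{3,1,0}{4,2,1} the difference vector is (1,1,1), so the unit vector is roughly (0.57735,0.57735,0.57735), and the \edef leaves behind (schematically) the already-expanded drawing command
\draw[->, thick,red] (3,1,0) -- ($(3,1,0)+(0.57735,0.57735,0.57735)$);
with the numbers frozen in before the axis environment processes its contents.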
\documentclass{article}
\usepackage{calculator}
\usepackage{pgfplots}
\pgfplotsset{compat=1.15}
% Draw vector and corresponding unitvector
\newcommand{\vecuvec}[2] %start point, end point (of vector)
{ \VECTORSUB(#2)(#1)(\sola,\solb,\solc)
\UNITVECTOR(\sola, \solb, \solc)(\sola,\solb,\solc)
%arrow in blue
\draw[->,thick,blue] (#1) -- (#2);
%corresponding unit-vector in red:
\edef\temp{\noexpand\draw[->, thick,red] (#1) -- ($(#1)+(\sola,\solb,\solc)$);}
\temp
}
\begin{document}
\begin{tikzpicture}
\begin{axis}[xtick={0,1,...,4}, ytick={0,1,...,4}, ztick={0,1,...,4},
xmin=0,xmax=4,ymin=0,ymax=4, zmin=0,zmax=4]
\vecuvec{3,1,0}{4,2,1};
\vecuvec{2,2,2}{4,3,3};
\vecuvec{1,1,0}{0,2,1};
\end{axis}
\end{tikzpicture}
\end{document}
LilyPond — Snippets
This document shows a selected set of LilyPond snippets from the LilyPond Snippet Repository (LSR). It is in the public domain. We would like to extend many thanks to Sebastiano Vigna for maintaining the LSR web site and database, and to the University of Milano for hosting LSR. Please note that this document is not an exact subset of LSR: some snippets come from the ‘input/new’ LilyPond sources directory, and snippets from LSR are converted through `convert-ly`, as LSR is based on a stable LilyPond version and this document is for version 2.19.21. Snippets are grouped by tags; tags listed in the table of contents match a section of the LilyPond notation manual. Snippets may have several tags, and not all LSR tags may appear in this document. In the HTML version of this document, you can click on the file name or figure for each example to see the corresponding input file.
For more information about how this manual fits with the other documentation, or to read this manual in other formats, see Manuals. If you are missing any manuals, the complete documentation can be found at http://www.lilypond.org/.
### Type B anomalies (Mis-)Matching
In this talk we analyse several aspects related to type B conformal anomalies associated with Coulomb branch operators in 4d N=2 SCFTs. In particular, when the vacuum preserves the conformal symmetry, these anomalies coincide with the two point function coefficients in the Coulomb branch chiral ring. We analyse the behaviour of these anomalies on the Higgs branch, where conformal symmetry is spontaneously broken. We review the argument developed in arXiv:1911.05827 and, following it, we argue that these anomalies are covariantly constant on conformal manifolds. In some cases this can be used to show that the anomalies match in the broken and unbroken phases. Then, in the second part of the talk, we focus on some specific 4d N=2 SCFTs and we test type B anomaly (Mis-)Matching through an explicit Feynman diagram computation. We finally observe that an implication of Type B anomaly Mismatching is the existence of a second covariantly constant metric on the conformal manifold that imposes restrictions on its holonomy group.
Zoom Meeting ID: 998-7902-4130
3rd of November 2020, 14:30
Building VariantAnnotation on Windows - Makevars.win error
simon ▴ 10
@simon-8024
Last seen 6.4 years ago
United States
When installing VariantAnnotation from source on Windows, I get the following error:
Makevars.win:4: C:/Users/Simon: No such file or directory
Makevars.win:4: Coetzee/Documents/R/win-library/3.4/Rsamtools/usretc/i386/Rsamtools.mk: No such file or directory
It seems to have an issue with spaces in the path names. If I modify the Makevars.win to point to the path like this:
C:/Users/Simon\ Coetzee/Documents/R/win-library/3.4/Rsamtools/usretc/x64/Rsamtools.mk
note the backslash between my names; then all compiles fine. I don't know how to make
system.file()
or
file.path()
produce the escape sequence to make this a universal solution.
windows variantannotation makevars.win
@martin-morgan-1513
Last seen 13 days ago
United States
Thanks; on Windows I think the output of system.file() can be passed to shortPathName(). I'll update VariantAnnotation (and the instructions in Rsamtools).
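In R, the idea would look roughly like this (only a sketch of the approach, not the actual Makevars.win change):
## build the path to Rsamtools.mk and convert it to its short (8.3) form,
## which contains no spaces and is therefore safe to use in a Makefile
p <- system.file("usretc", .Platform$r_arch, "Rsamtools.mk", package = "Rsamtools")
shortPathName(p)  # e.g. "C:/Users/SIMON~1/..." (Windows only)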
# What is the discriminant of 2x^2 + x - 1 = 0 and what does that mean?
Jul 26, 2015
Solve 2x^2 + x - 1 = 0
#### Explanation:
$D = {d}^{2} = {b}^{2} - 4 a c = 1 + 8 = 9$ --> $d = \pm 3$
This means there are 2 real roots (2 x-intercepts)
$x = - \frac{b}{2 a} \pm \frac{d}{2 a} .$
$x = - \frac{1}{4} \pm \frac{3}{4}$ --> $x = - 1$ and $x = \frac{1}{2}$
Jul 26, 2015
The discriminant is $9$.
A positive discriminant means that there are two real roots (x-intercepts).
Also, since the discriminant is a perfect square, the two roots are rational.
#### Explanation:
$2 {x}^{2} + x - 1 = 0$ is a quadratic equation of the form $a {x}^{2} + b x + c = 0$, where $a = 2$, $b = 1$, and $c = - 1$.
The formula for the discriminant, $\text{D}$, comes from the quadratic formula, $x = \frac{- b \pm \sqrt{\textcolor{red}{{b}^{2} - 4 a c}}}{2 a}$ .
$\text{D} = {b}^{2} - 4 a c = {1}^{2} - 4 \left(2\right) \left(- 1\right) = 1 + 8 = 9$
A positive discriminant means that there are two real roots (x-intercepts).
Since the discriminant is a perfect square, the two roots are also rational.
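You can check this by factoring, which works over the rationals precisely because the discriminant is a perfect square:
$2 {x}^{2} + x - 1 = \left(2 x - 1\right) \left(x + 1\right) = 0$, giving $x = \frac{1}{2}$ and $x = - 1$, the same two rational roots.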
BYJU'S online equivalent fractions calculator is a free tool that checks whether two given fractions are equivalent and displays the result in a fraction of seconds. Equivalent fractions may look different, but when you reduce them to lowest terms you get the same value: if the ratio between numerator and denominator is the same, the fractions are equivalent. A rational exponent A^(m/n) is equivalent to its radical notation form (n-th root of A)^m. Related tools described alongside it include a complex-expression simplifier (for example, to simplify (1+i)^8 simply type (1+i)^8), a general expression simplifier that expands multiplication and combines like terms (exponents are entered with the caret symbol ^, any lowercase letter may be used as a variable, and the multiplication sign can be skipped, so 5x is equivalent to 5*x), a fractions calculator that evaluates expressions with fractions combined with integers, decimals, and mixed numbers, a lowest-common-denominator calculator (for example, for 1 1/2, 3/8 and 5/6: convert mixed numbers to improper fractions, find the LCD of all the fractions, then rewrite each as an equivalent fraction using the LCD), a partial fraction decomposition calculator, and a decimal-to-fraction converter. There are also free equivalent-expressions worksheets intended to help 6th grade students prepare for end-of-year math exams.
Considering 35s acquisition
01-28-2014, 03:43 PM
Post: #1
Tugdual Senior Member Posts: 756 Joined: Dec 2013
Considering 35s acquisition
I used to own the original 15C and purchased the recent 15CLE which I finally adopted due to the 100x faster speed than the original one. Yes there is this damned PSE error but for the rest I'm quite happy with it.
I recently purchased the Prime for the sake of having the top modern features such as CAS, graphics, touch screen and much much more. This is a very exciting calculator, but I would rather play with it for experimentation than use it on a daily basis. Also, the size is quite big and I'm concerned it may get stolen or broken.
I would still like to have a proper modern RPN calculator for my daily (small) calculations, a bit like the 15C but maybe with 4 rows (X,Y,Z,T) and more dedicated to engineering work, including unit conversions and also integer base conversions. I was considering the 35s as a decent replacement for the 15C given my tiny spec.
I saw that the calculator had been somewhat disappointing due to a few cumbersome bugs, and I'm a bit surprised by the very small number of topics related to the 35s on this forum.
So two questions:
- Is the 35s a valid option or a commercial failure?
- Should I consider another option? I don't want to go into too much complexity like the Prime, just need something fairly robust and efficient for common daily use.
01-28-2014, 04:42 PM
Post: #2
Thomas Radtke Senior Member Posts: 778 Joined: Dec 2013
RE: Considering 35s acquisition
(01-28-2014 03:43 PM)Tugdual Wrote: Yes there is this damned PSE error but for the rest I'm quite happy with it.
Always have some batteries with you, as there's no working brown-out detection.
(01-28-2014 03:43 PM)Tugdual Wrote: [...] a bit like the 15C but may be with 4 rows (X,Y,Z,T) [...]
The 35s has two rows (X,Y).
(01-28-2014 03:43 PM)Tugdual Wrote: [...] I'm a bit surprised by the very little number of topics related to the 35s on this forum.
There were lots of discussions and enthusiasm on the old forum until the number of known bugs grew beyond any acceptable number.
(01-28-2014 03:43 PM)Tugdual Wrote: - Should I consider another option? I don't want to go into too much complexity like the Prime, just need something fairly robust and efficient for common daily use.
If you don't mind the stickers, go for the wp-34s (saving Walter one post here ;-).
01-28-2014, 05:29 PM
Post: #3
rncgray Junior Member Posts: 36 Joined: Dec 2013
RE: Considering 35s acquisition
Definitely consider the 34s
01-28-2014, 07:13 PM
Post: #4
Massimo Gnerucci Senior Member Posts: 2,424 Joined: Dec 2013
RE: Considering 35s acquisition
Wait for the WP43s if you like 4 lines (or more).
But, in the meantime, a WP34S is due!
Greetings,
Massimo
-+×÷ ↔ left is right and right is wrong
01-28-2014, 07:55 PM
Post: #5
xmehq Junior Member Posts: 24 Joined: Jan 2014
RE: Considering 35s acquisition
Other than the previously mentioned bugs the 35s is a nice calculator so depending on your needs it may work well for you.
The 34s is loaded with functions and potential; however, the hardware on mine (30b) is problematic, e.g. keys that don't register reliably and parts of the display that seem to operate at different contrast levels. The stickers are not an issue for me at all.
01-28-2014, 08:53 PM (This post was last modified: 01-28-2014 08:54 PM by RMollov.)
Post: #6
RMollov Member Posts: 260 Joined: Dec 2013
RE: Considering 35s acquisition
(01-28-2014 03:43 PM)Tugdual Wrote: So two questions:
- Is the 35s a valid option or a commercial failure?
- Should I consider another option? I don't want to go into too much complexity like the Prime, just need something fairly robust and efficient for common daily use.
It is not that bad actually. Good enough for the usage you describe: a 2-line, not-so-good display (too shiny), an equation writer, a WORKING solver, and a relatively good keyboard.
01-28-2014, 09:27 PM
Post: #7
Lars B Unregistered
RE: Considering 35s acquisition
Using the 35S as an engineering tool is possible. I'm using it daily, but the lack of direct polar-rectangular conversion is really, really (!) annoying. The easy-to-use SOLVE function compensates slightly for that "bug", but not fully.
In fact, I bought a 15C LE just because of the missing P/R function. The 15C is exactly what an engineer needs.
01-28-2014, 10:08 PM
Post: #8
Dieter Senior Member Posts: 2,397 Joined: Dec 2013
RE: Considering 35s acquisition
(01-28-2014 03:43 PM)Tugdual Wrote: I would still like to have a proper modern RPN calculator for my daily (small) calculations. (...) I was considering the 35s as a decent replacement for the 15C considering my tiny spec.
I have been using the 35s since it was available in 2007 for exactly this purpose, and I think it does a very good job. Yes, there are some issues and known bugs, but I cannot say they bother me. No R-P conversion? True, but I could not care less. All others may write two short routines that do the trick.
The 35s really is a nice calculator for everyday work. It has a good keyboard, it displays both X and Y, the most important transcendental functions can be accessed directly, and sin/cos/tan, sqrt, 1/x, y^x are even unshifted. Other features include HP Solve and Integrate, both for programs and equations. This equation mode is very functional and simple to use.
I also have a WP34s, and I really like its sheer mathematical power, its accuracy up to 34 digits, its versatility and programmability. But for everyday work I definitely prefer the 35s. It is so much easier and faster to use. Even commands hidden in menus can be accessed directly.
Yes, I like my 35s. ;-)
Dieter
01-28-2014, 11:58 PM
Post: #9
Thomas Klemm Senior Member Posts: 1,550 Joined: Dec 2013
RE: Considering 35s acquisition
Check the HP-35s bug list. Probably not an issue considering your usage. But what bothers me is the missing consistency:
• complex numbers are supported but you can't calculate the square root
• there's a 2*2 and 3*3 linear solver but it can't be used in a program
• missing decomposition of complex numbers or vectors
Coming from the HP-15C this might disappoint you.
But the solver is useful and you probably never have to worry about memory.
There's an emulator for windows you might want to try beforehand.
Quote:dedicated to engineer work including unit conversions and also integer base conversions
You might find this program useful: Base Conversion for HP-11C
Cheers
Thomas
01-29-2014, 02:54 AM
Post: #10
d b Senior Member Posts: 489 Joined: Dec 2013
RE: Considering 35s acquisition
(01-28-2014 03:43 PM)Tugdual Wrote: So two questions:
- Is the 35s a valid option or a comercial failure?
- Should I consider another option? I don't want to go into too much complexity like the Prime, just need something fairly robust and efficient for common daily use.
Tugdual;
The 35 is nice. The PSE bug isn't really an issue because you can always use R/S to better effect. You'll like the 2-line screen, and the keys on mine were good. There is a program here to take care of the questionable P<>R solution it shipped with. I liked both of mine (one bought and one given to me by HP at an HHC) but I ended up giving both away. No reflection on the calc. I wouldn't give trash as a gift.
The WP34s is a great calculator if you use the 30b platform with its better keyboard and not the 20b. It will just amaze you at least once per day. Its only failing is the top line of the screen, but that's hardly a deal breaker with all it will do. They put (as we say in American) "everything but the kitchen sink" into it. I only mention it because you wanted a 4-line screen and the 34 doesn't quite have 2.
I don't know which were commercial successes or failures. By the standards of other companies most of what HP makes would probably be a "commercial failure", but what other companies can point to people commonly using 30 year old units on a daily basis? You'll be happy with either one, till the 43s comes out. It's OK to be fickle in that. "One Woman, Many Calculators". -db
01-29-2014, 06:39 AM
Post: #11
Maximilian Hohmann Senior Member Posts: 961 Joined: Dec 2013
RE: Considering 35s acquisition
(01-29-2014 02:54 AM)Den Belillo (Martinez Ca.) Wrote: ... but what other companies can point to people commonly using 30 year old units on a daily basis?
Boeing, Airbus, Cessna, Learjet, Piper, Beechcraft, Hawker,.... (and their drivers use 30 year old calculators) :-)
And regarding the original question: Why not just give it a try? For the price of one visit to the petrol station with your car, at least looking at European fuel prices, you can buy two or three of them. Just to put the expense in perspective.
01-29-2014, 01:12 PM
Post: #12
Dieter Senior Member Posts: 2,397 Joined: Dec 2013
RE: Considering 35s acquisition
(01-28-2014 11:58 PM)Thomas Klemm Wrote: complex numbers are supported but you can't calculate the square root
Of course you can. It's true that the function set for complex numbers is a bit limited, and so the $$\sqrt{x}$$ key is not supported. But there is an easy workaround: Simply use 0,5 $$y^x$$ instead.
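For instance, $$(3+4i)^{0.5} = 2+i$$, which is easy to verify since $$(2+i)^{2} = 4+4i+i^{2} = 3+4i$$; so the 0,5 $$y^x$$ route returns the principal square root just as a dedicated key would.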
Dieter
01-31-2014, 09:49 AM
Post: #13
Tugdual Senior Member Posts: 756 Joined: Dec 2013
RE: Considering 35s acquisition
Thank you all for the feedback; I followed your recommendations and installed the 35s emulator (or is it a simulator?). To be fair it took me some time to actually see the bugs, more particularly the ones related to cos() and tan(). I don't think this would be too much of a problem.
On the other hand I was a bit surprised that I couldn't click repeatedly on keys but had to release them for like 20ms; I didn’t observe this while using the keyboard. How is the actual 35s?
Other question is while playing with the emulator, I was under the impression the calculator was generally slow. I was even surprised that the emulator (or simulator?) was that slow unless it is a good emulation with the actual timing. The 35s doesn’t use an ARM or a Saturn so I don’t really know how it performs in reality. Watching videos on YouTube I concluded that the 35s was slightly faster than the original 15C but considerably slower than the 15C LE. Is that a fair statement to say that the 35s is pretty slow?
01-31-2014, 10:03 AM
Post: #14
walter b On Vacation Posts: 1,957 Joined: Dec 2013
RE: Considering 35s acquisition
(01-31-2014 09:49 AM)Tugdual Wrote: Is that a fair statement to say that the 35s is pretty slow?
Let me say the HP-30b (WP 34S) is pretty fast in comparison.
d:-)
01-31-2014, 10:32 AM
Post: #15
Maximilian Hohmann Senior Member Posts: 961 Joined: Dec 2013
RE: Considering 35s acquisition
(01-31-2014 09:49 AM)Tugdual Wrote: On the other hand I was a bit surprised that I couldn't click repeatedly on keys but had to release them for like 20ms; I didn’t observe this while using the keyboard. How is the actual 35s?
The actual 35s is pretty normal in that respect. No noticeable delay.
(01-31-2014 09:49 AM)Tugdual Wrote: Is that a fair statement to say that the 35s is pretty slow?
I could never make much (if any) sense of speed claims regarding pocket calculators. "Slow" or "fast" means what exactly? I would say that every calculator made after 1980 is fast enough to process every entry from the keyboard in less time than it takes me to read the result or press the next key. Which makes it a "fast" calculator compared to some older pieces in my collection that take over two seconds to compute a trigonometric function.
On the other hand, if one writes a program to compute the n'th digit of Pi or to check whether a 10-digit number is prime, one can get the impression that one's calculator might be a little "slow" to do that. But then again, nobody who needs to do this kind of calculation will do it with a calculator - not in the year 2014 at least! - so this kind of speed is totally meaningless.
And regarding the question: No, the 35s is not slow. And it has the arithmetic keys on the good side and therefore is a good calculator :-)
01-31-2014, 10:52 AM
Post: #16
Massimo Gnerucci Senior Member Posts: 2,424 Joined: Dec 2013
RE: Considering 35s acquisition
(01-31-2014 10:32 AM)Maximilian Hohmann Wrote: And it has the arithmetic keys on the good side and therefore is a good calculator :-)
Really?
So you must have Gene's unreleased prototype
Greetings,
Massimo
-+×÷ ↔ left is right and right is wrong
01-31-2014, 11:08 AM
Post: #17
Thomas Radtke Senior Member Posts: 778 Joined: Dec 2013
RE: Considering 35s acquisition
(01-31-2014 09:49 AM)Tugdual Wrote: On the other hand I was a bit surprised that I couldn't click repeatedly on keys but had to release them for like 20ms; I didn’t observe this while using the keyboard. How is the actual 35s?
It has been reported that the 35s misses keystrokes when operated too fast.
(01-31-2014 09:49 AM)Tugdual Wrote: Other question is while playing with the emulator, I was under the impression the calculator was generally slow.
It is slightly slower than the 32SII.
01-31-2014, 01:20 PM
Post: #18
walter b On Vacation Posts: 1,957 Joined: Dec 2013
RE: Considering 35s acquisition
(01-31-2014 09:49 AM)Tugdual Wrote: Other question is while playing with the emulator, I was under the impression the calculator was generally slow.
It is slightly slower than the 32SII.
That's called progress.
01-31-2014, 04:00 PM
Post: #19
Tugdual Senior Member Posts: 756 Joined: Dec 2013
RE: Considering 35s acquisition
Oh well, I guess I'll stick on my 15C LE.
01-31-2014, 04:37 PM
Post: #20
Thomas Klemm Senior Member Posts: 1,550 Joined: Dec 2013
RE: Considering 35s acquisition
(01-29-2014 01:12 PM)Dieter Wrote:
(01-28-2014 11:58 PM)Thomas Klemm Wrote: complex numbers are supported but you can't calculate the square root
Of course you can. It's true that the function set for complex numbers is a bit limited, and so the $$\sqrt{x}$$ key is not supported. But there is an easy workaround: Simply use 0,5 $$y^x$$ instead.
Dieter
My point was that it's inconsistent:
• $$y^x$$ works but neither $$\sqrt{x}$$ nor $$x^2$$
• $$\sin(x)$$, $$\cos(x)$$ and $$\tan(x)$$ work but not their inverses
• hyperbolic functions are missing as well
I just assume that this might annoy someone who is used to how the HP-15C, HP-42S or HP-48 handle complex numbers. The HP-35S makes me wonder whether I can use $$1/x$$. Or maybe I have to use $$y^x$$ for this as well? So I will consult the user guide and notice that $$y^x$$ isn't listed though it is supported.
Sure I can come around this and write programs. But then I might just decide to use a tool that better fits my needs.
A good example for consistency is the use of left- and right-arrow in the HP-48. Let's assume you're in the STAT/DATA menu. There's $$\Sigma$$DAT which will just push '$$\Sigma$$DAT' on the stack. Wonder what left-arrow $$\Sigma$$DAT and right-arrow $$\Sigma$$DAT do? Correct: STO$$\Sigma$$ and RCL$$\Sigma$$ and that's exactly what you'd expect.
Cheers
Thomas
« Next Oldest | Next Newest »
User(s) browsing this thread: 1 Guest(s) |
# A rain gutter is to be constructed from a metal sheet of width 3a cm by bending one-third of the sheet
###### Question:
A rain gutter is to be constructed from a metal sheet of width 3a cm by bending one-third of the sheet on each side through an angle θ. How should the angle θ be chosen so that the gutter will carry the maximum amount of water?
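If the two sides of width a are bent up by an angle θ measured from the base (my reading of the lost figure), the cross-sectional area is A(θ) = a² sin θ (1 + cos θ); a short numerical sketch locating its maximum:

```python
import numpy as np

a = 1.0                                              # length of each bent side (one third of the 3a sheet)
theta = np.linspace(0.01, np.pi / 2, 100_000)
area = a**2 * np.sin(theta) * (1 + np.cos(theta))    # trapezoidal cross-section
print(theta[np.argmax(area)], np.pi / 3)             # maximum sits at theta = pi/3 (60 degrees)
```

Under that parametrisation the optimal bend is θ = π/3 rad.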
#### Similar Solved Questions
##### Three cards are drawn wlth rep acement from & standara deck "hot j the probability thatthe (irst card wlll bc A heart the second cord Kll be & red card, and thcthird card will be thc Irvc of clubs? Express Your Jnswrms (rcilon or 0 declmnal number rounded to (our decimal ploces:
##### In caring for a 6-year-old child in a full leg cast, which of the following findings...
In caring for a 6-year-old child in a full leg cast, which of the following findings should the nurse report to the physician immediately? The nurse would be accurate when giving information to the parent of a child with cystic fibrosis by explaining that the pathophysiology responsible for respirat...
##### The formula for the circumference C of a circle of radius r is__________
##### For particular gene, homozygous dominant AA and heterozygous Aa individuals produce green pigment, while homozygous recessive aa individuals produce yellow pigment: During the course of your research you discover the a1 allele that contains class transposon insertion Allele A is dominant to a1. Which genotype is capable of producing mixture of green and yellow pigment?A: AaB. Aa1C.aa1none of the above
##### Compute Cost of Goods Manufactured for Thor Industries for the Fiscal Year ended 7/31/19. Hint: You...
Compute Cost of Goods Manufactured for Thor Industries for the Fiscal Year ended 7/31/19. Hint: You don't have enough data to solve this by analyzing the WIP account. You'll have to find another way - by analyzing another account. Show all your calculations. Major classifications of inv...
##### Semiconductor class, need right answer pls 7.1 Assume that the gate oxide between an n+ poly-Si...
semiconductor class, need right answer pls 7.1 Assume that the gate oxide between an n+ poly-Si gate and the p-substrate is 11 thick and Na= 1E18 cm" (a) What is the Vt of this device? (b) What is the subthreshold swing, S? (c) What is the maximum leakage current if W= 1 μm, L=18nm? (...
##### Let f:n 7C be analytic and HOn-constant in domai S2 Prove that the ahsolute value of f (2) cannot attain HAXimm in S2. Provice detailed proof for the general (Se . You probably ueed t0 first prove result for tle particular case when when opCH cirele HId then Address the general case_
##### 9 Draw the products for the following reactions_OMeOMeb_
##### Let G be the inverse of the map F(X, Y) = (xy, xly) from the xy-plane to the UV-plane: Let D be the domain bounded by curves xy = 8,Xy 19, xly 16,xly (see the figure below)_ Calculate the double integral J J dx dy using tne Change of Variables Formula
##### 11. Which of the compounds below fits the following C-13 NMR?TMS180I60I0A)
##### Point) Determine the sum of the following series:18) + 24/5
##### 7 . Several fish samples were analyzed for PCB content by Gas Chromatography, ad the following results in parts per billion were obtained: Sample [PCB] ppb 0.252 0.246 0.252 0.275 0.250a) Calculate the mean median, range_ and standard deviation for this set of data b) Evaluate the 95% confidence interval for this set of data_
##### O table 2png (1410143) Extension Eam 3 (Modules 8-10) x G the following amounts represet x...
O table 2png (1410143) Extension Eam 3 (Modules 8-10) x G the following amounts represet x mewconnect.mheducation.com/flow/connect.html mm 3 (Modules 8-10) Seve & Exit Submit Saved Help The folowing amounts represent totals from the first three years of operations. Calculate the balance of Retei...
##### QUESTIONYou place An object in tront % COTverng kens ol fOcal length = 252 mm 43.2 cm You mcasuro nfter tho bons What [s the size of tha abjocl? (Stato onswer = inverted Uoe ol siz0 -5 @on at a distance centintoters Inswer) with digits the night 0f tha decimal Do not Incudo unit
##### In 1962, Walter He described some micrographs as having the appearance of three layers, with two...
In 1962, Walter He described some micrographs as having the appearance of three layers, with two dense outer layers and a lighter middle layer. is a bilayer. In his paper, Stoeckenius also stated hat previous papers Which of the following explains the appearance of a three-layer membrane in microgra...
##### Use the References to access important values if needed for this question. 2Crl+ + 3Hg2++ 7H,0—+Cr20,2-...
Use the References to access important values if needed for this question. 2Crl+ + 3Hg2++ 7H,0—+Cr20,2- + 3Hg+ 14H In the above redox reaction, use oxidation numbers to identify the element oxidized, the element reduced, the oxidizing agent and the reducing agent. name of the element oxidized:...
##### A dentist's drill starts from rest. After 3.10 s of constant angular acceleration it turns at...
A dentist's drill starts from rest. After 3.10 s of constant angular acceleration it turns at a rate of 2.80 ✕ 104 rev/min. (a) Find the drill's angular acceleration. _______ rad/s2 (b) Determine the angle (in radians) through which the drill rotates during this period. ________ rad...
##### A BC licence plate consists of 2 letters, then 3 numbers, and then another letter . There are 26 letters and 10 numbers How many possible license plates are there for this format?Answer:
##### What is the angle of the first minimum in the diffraction pattern produced by 550 nm light incident upon a 11000 nm slit?82.9 020.1 0No minimum exists
##### Consider the following (hypothetical) data describing = survey are asked whether which dog and cat owners they go for daily walks Assume that we want t0 Use 0.01 significance level to test the claim that whether vou own dog independent daily walk: of whether you take Daily walk No daily walk Dog ownerCal ownerWhat are the null and alternative hypotheses for this study?Find the expected frequency for the cells with an observed value of 95 Daily walk No daily walknietCal ownerFill out the (rtrow o
##### Lab Day 8: Chromatography Prelab Questions 1) The basic food colors you can buy are red,...
Lab Day 8: Chromatography Prelab Questions 1) The basic food colors you can buy are red, blue areen, and yellow. Think about what you know about colors and predict which of these might be a single colored compound and which ones might contain more than one. write down your prediction. If you have th...
##### Consider the following functon and closed Interval, ((x) X2/3 [-64 64] Is ( continuous on the closed Interval [-64, 6417 YesIf f Is dlfferentiable on the open Interval (-64, 64), find f"(*) (If it Is not dliterentiablo on the opan Interval, entcr phe )M(x)Find ((-64) and (64). (-64) ((64)be applled to /on the closed Interval (*, 6]: (Select oll that opply ) Determlne whether Rolle's Theorem can Yes, Rolle's Theorem can be applled: continuous on the closed Interval [&, b]: No;
##### Chapter 3, Practice Problem 3/125 The 52-in. drum rotates about a horizontal axis with a constant...
Chapter 3, Practice Problem 3/125 The 52-in. drum rotates about a horizontal axis with a constant angular velocity 4.7 rad/sec. The small block A has no motion relative to the drum surface as it passes the bottom position = 0. Determine the coefficient of static friction Ys which would result in blo...
##### Calculate the charge within the regions, . ??= 1/(x3y3z3) 0.1≤ ?, ?, ? ≤ 0.2
##### What does an alarming or an unanticipated event do to the consumers in a health care organization
What does an alarming or an unanticipated event do to the consumers in a health care organization? How can we best handle this reaction? What is the best way to disseminate or communicate information in a health care setting during a disaster or crisis?...
##### Fred uses a constant volume gas thermometer to attempt to measure absolute zero. While the bulb...
Fred uses a constant volume gas thermometer to attempt to measure absolute zero. While the bulb is in th boiling water bath, he opens a valve so the pressure within the bulb becomes P = 1 atm., and takes a reading He closes the valve md then takes a set of readings at different temperatures. He plot...
##### QUESTION 2 [4 MARKS] FIGURE 1 shows a heating element of length 1.1 m and a cross-sectional area of 3.1 x 10-6 m2. The wire becomes hot in response to the flowing charge and heats the casing: The material of the wire has & resistivity of Po = 6.8 x 10-5 Qm at initial temperature, To = 320 %C and a temperature coefficient of resistivity, a = 2.0 x10 3 (C%)-' . Ignore the change in length due to the temperature increase_Heater *ure4-31*10s m?)Metal casingFIGURE 1Determine the resistance
##### Think About It Consider two forces of equal magnitude acting on a point.(a) When the magnitude of the resultant is the sum of the magnitudes of the two forces, make a conjecture about the angle between the forces.(b) When the resultant of the forces is $0,$ make a conjecture about the angle between the forces.(c) Can the magnitude of the resultant be greater than the sum of the magnitudes of the two forces? Explain.
##### The graph of a function defined on an interval [a, b] is given.J=fW)(6.3)(,2) (.) (0.0) 2(L-_(-4 -3)(a) Using the Riemann Sums, approximate f f(x)dx by choosing U; as the left endpoint of each subinterval: Solve by partitioning the interval [a,b] into subintervals [-4, ~1], [-1,0], [0, 1], [1,3], [3,5], [5,6].f(x)dx(b) Using the Riemann sums, approximate d f(x)dx by choosing U; as the right endpoint of each subinterval: Solve by partitioning the interval [a,b] into subintervals [-4, -1], [-1,0]
##### Python 3. Help please. Here is my current code. What can I do to fix it?...
Python 3. Help please. Here is my current code. What can I do to fix it? I list=[1,2,2,3,4,5] d = {} for item in list: if item in d: return True d[item] = True return False Exercise 11.4. If you did Exercise 10.7, you already have a function named has_duplicates that takes a list as a parameter and ...
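The pasted snippet returns from module level, which is a syntax error; a minimal corrected version of what the exercise appears to ask for (Think Python's has_duplicates) might look like this:

```python
def has_duplicates(items):
    """Return True if any element appears more than once in items."""
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

print(has_duplicates([1, 2, 2, 3, 4, 5]))  # True
print(has_duplicates([1, 2, 3]))           # False
```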
##### ?The choices for the fill in part are risk adverse/risk friendly. The second choices are would/would...
?The choices for the fill in part are risk adverse/risk friendly. The second choices are would/would not. The third choice are less than/greater than. Suppose your friend Yvette offers you the following bet: She will flip a coin and pay you $1,000 if it lands heads up and collect$1,000 from you if ...
##### Chapter 5- Cardiology - Build Medical Words INSTRUCTIONS: Use all of the word parts below to...
Chapter 5- Cardiology - Build Medical Words INSTRUCTIONS: Use all of the word parts below to build 22 cardiology words with three word parts each -id peri ohleb/o- rrhythm/d scler / o- stat/o- supra tachy- tens/o- tens/o- thromb/do trans- tri- vas/o- vas/o- ventricul/o dilat/o- akter/o- atber/o- -io...
##### Radioactive atoms are unstable because they have too much energy. When they release their extra energy,...
##### In a Compton scattering experiment, a photon is scattered through an angle of $90.0^{circ}$ and the electron is set into motion in a direction at an angle of $20.0^{circ}$ to the original direction of the photon. Explain whether this information is sufficient to determine uniquely the wavelength of the scattered photon. If it is, find this wavelength.
# How can I make a UART receiver using logic devices (74164,counters,logic gates,..)?
I am trying to make a serial-in, parallel-out register controlled by my PC, built from logic gates plus a 74164, a 74193 and a 555 timer. I am driving it from Visual Basic and the serial communication itself works: in an earlier application I even built a chat program with it, and the data is sent correctly. The problem is receiving the data and holding it in the logic. The goal is to control the data I output and then drive a stepper motor.
This is the circuit I have developed so far. The 555 timer runs at 10,200 Hz and the serial link is 9600 baud; the counter stops at 9, and the clock is gated by a logic 0 so that it starts when the start bit arrives. A NAND flip-flop remembers when to start and is cleared when the counter reaches 9. The problem is that the circuit is really buggy and I am not sure why: when I send any data, the 74164 displays 127 or 63 in decimal (0111 1111 and 0011 1111 respectively). I would appreciate help working out what is wrong.
• Is this a retro project? Discrete uarts were done like this 50 years ago. Anyways, RS232 is not TTL compatible and inverted. The clock must rise mid bit cell. Normally you’d have a clock 16x the baud rate and sample the start bit. With Arduinos only a couple of \$ , why would you do it the hard way? Are you using a simulator? Proteus? If so, use the simulator tools to see what is actually happening. – Kartman Feb 28 at 4:52
• It might be useful to draw a timing diagram so you can figure out what is going wrong. – ScienceGeyser Feb 28 at 8:27
• In the Proteus simulator the DB-9 is TTL compatible, so I do not have to worry about that in the simulation. Thank you for your answers! – Albert Luna Feb 28 at 14:55
tl; dr: You need to understand how UARTs actually work. You're missing a lot of stuff.
What you have designed thus far is a basic deserializer, with a kind of weird way of making the clock that depends on the data input.
Critically, it's failing to properly frame the input data and thus pick off the bits at the right time. And, your setup needs at least 1-byte of buffering (an output latch) to hold the completed RX byte when it's been received.
How do we frame the data? In actual UARTs, the RX waveform is sampled with a higher-rate clock (16x baud typically.) This sampling clock looks for the leading edge of the 'start' bit at the beginning of the transmission, then uses that to determine the optimal bit sampling points for the following bits.
More about that process here: https://www.maximintegrated.com/en/design/technical-documents/tutorials/2/2141.html
This means at the very least you need to rethink your clock and make a little state machine (that is, the framer) for detecting the start bit, aligning the sampling, counting the shift-in cycles to the shift register, then transferring the completed byte to the output latch.
All right, so we know about the start bit and what it's for. We also see that there's a stop bit. Why do we need that? One thing you'll notice is that the start and stop bits are opposite polarity. This guarantees that there's always a 0-1 transition between frames, so the framer can detect that edge and re-align the sample points. Meanwhile, your input shift register discards the stop bit (it carries no data), but the framer nevertheless needs it.
At minimum then, your receiver needs to support at least the basic 10-bit '8-n-1' frame (1 start, 8 bits, no parity, 1 stop) to be useful for RS-232.
Full-featured RS-232 UARTs also support options for variable data size (5-9 bits), parity (odd, even, or none) and additional stop bits (1-2). Most systems however don't care about anything but 8-n-1, the minimum format.
Now that we've framed the data and captured the data bits, it'd be useful if we presented the data to the host, one byte at a time. So we grab the state of the shift register once RX bit 7 (MSB) has been clocked in, and transfer that to a latch. Even better, we push it to a FIFO and have some flag logic indicating it's ready for the host to read.
Finally, external to this UART logic itself, we see that the RS-232 electrical interface isn't logic level, but instead uses higher-voltage (roughly +/- 12V) signals. And, the waveform is inverted. To use RS-232 with logic you need to add a chip like a MAX232 to translate the voltage down to TTL. On the other hand, if you're using local TTL levels then this isn't a problem.
Here's a UART design in HDL for example. This code defines basic receive and transmit functions. There are many others to be found; it's a popular project for FPGA learners.
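If HDL is unfamiliar, the same receive path can be sketched behaviourally. The following Python model is my own illustration (not the linked design): a 16x-oversampled framer and deserialiser for an 8-N-1 frame, LSB first.

```python
def uart_rx_8n1(samples, oversample=16):
    """Behavioural model of an 8-N-1 UART receiver.

    samples: iterable of line levels (0/1), one per tick of the 16x sampling clock.
    Returns the bytes whose framing (start bit low, stop bit high) checked out.
    """
    received = []
    state, count, nbits, byte = "IDLE", 0, 0, 0
    for level in samples:
        if state == "IDLE":
            if level == 0:                       # possible leading edge of a start bit
                state, count = "START", 0
        elif state == "START":
            count += 1
            if count == oversample // 2:         # middle of the start bit
                if level == 0:
                    state, count, nbits, byte = "DATA", 0, 0, 0
                else:
                    state = "IDLE"               # glitch, not a real start bit
        elif state == "DATA":
            count += 1
            if count == oversample:              # middle of the next data bit
                byte |= level << nbits           # LSB arrives first
                nbits += 1
                count = 0
                if nbits == 8:
                    state = "STOP"
        elif state == "STOP":
            count += 1
            if count == oversample:              # middle of the stop bit
                if level == 1:                   # framing OK; otherwise drop the byte
                    received.append(byte)
                state = "IDLE"
    return received


def frame(byte, oversample=16):
    """Idle line, start bit, 8 data bits LSB first, stop bit -- 16 samples per bit."""
    bits = [1] * oversample + [0] * oversample
    bits += [(byte >> i) & 1 for i in range(8) for _ in range(oversample)]
    return bits + [1] * oversample


print(uart_rx_8n1(frame(0x55)))   # -> [85]
```

In hardware the sample list becomes the 16x baud clock, the counters map onto a small binary counter, and the DATA state's shift-and-count is essentially the 74164 plus the modulo counter the question already has; the missing pieces are the half-bit alignment on the start bit and the framing check on the stop bit.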
Postscript
That all said, if your goal is to interface a peripheral, maybe you need to think about using a microcontroller with a UART. The Arduino Nano is a good choice for this, and it includes a USB-to-serial on board. This is much more convenient than trying to find a PC with an actual serial port.
• In a discrete design we shouldn't need oversampling, as long as we can reset the clock whenever a bit transition is detected. Oversampling is only important when you have a fixed-frequency clock. – user253751 Feb 28 at 13:42
• It has nothing to do with a free-running clock, or lack thereof. The OP design tries to crash-lock the clock with the start bit, then ignore the successive bits for crash-lock using a counter, and hope that the resulting clock captures the rest of the bits correctly. As it is, it doesn't do a good job of aligning the sampling, which is why it fails. Sure, there are analog methods like one-shots that could possibly make an aligned pulse, but a digital method is simpler, more predictable, and can support multiple baud rates, which is why UART ICs use this method. – hacktastical Feb 28 at 18:11
• BTW, how did 16x get to be the standard sampling rate? As opposed to, say, 8x or 12x? Did it have to do with the characteristics of the actual waveform as sent over a line, combined with "typical" clock mismatch due to clock generation inaccuracy? I mean, is it somehow determined to be a worst case thing that you need 16x and 12x just won't do? Or something else? – davidbak Feb 28 at 18:19
• The higher the over sampling, the more accurately the clock sample can be placed in the middle of the data window. If I had to hazard a guess, 16x is a convenient power of 2, and they found 4x or 8x to be insufficient. – hacktastical Feb 28 at 18:25
• @hacktastical: Actually, odd sampling rates are better than even sampling rates, since when using a odd sampling rate the 'ideal' sampling time would be halfway between two clocking events, yielding a symmetrical acceptable timing window. Given how many systems use clocks that are a multiple of 1MHz, I find it odd that 13x divide ratios aren't more common, since 1MHz/26 would yield 38,400. – supercat Feb 28 at 22:12
Interesting circuit, and in fact, you're very nearly there. I see three problems.
1. If your input is really RS-232 and not TTL (as implied by your 9-pin connector), then you need a circuit to convert RS-232 levels to TTL levels. Back in the day, this would have been the 1489 RS-232 line receiver chip, but there are more modern alternatives today.
2. Your timing is way off. You want the 555 to generate a rising edge in the center of each data bit. The first rising edge is going to occur immediately upon coming out of reset, but then you want the next rising edge to occur 156 µs after that, and each subsequent edge to occur 104 µs after the previous one.
Fortunately, because of how the 555 works, it is possible to achieve this combination of timing. Some simple algebra reveals that R6 needs to be 4.722 times the value of R5. For example, if R5 is 1.0 kΩ and R6 is 4.7 kΩ, a capacitor value of 22.4 nF will give you the time intervals you need.
simulate this circuit – Schematic created using CircuitLab
If you run the simulation of the above circuit1, you'll see that the interval between the first two rising edges is about 156 µs, and the interval between rising edges after that is about 104 µs. This happens because the timing capacitor must charge all the way from zero in the first interval. (The arithmetic behind these two numbers is spelled out in the sketch after this answer.)
3. You're relying on a "glitch" to reset the circuit after each byte of data. It might be better to use the output of your R-S latch to drive the reset to the counter, although this creates a potential race between coming out of reset and the first rising edge from the 555.
I would suggest a slight modification to your circuit. Get rid of the NAND gates and instead use half of a 7474 D flip-flop.
simulate this circuit
When U3-Q is high, the circuit is idle. The start bit resets U3, allowing the 555 to run and the counter to count. The counter counts 1 to 8, and then on the ninth clock, U3 is set again. If you don't want a whole lot of "noise" on the parallel outputs while the data is being shifted in, add an 8-bit latch that is controlled by U3. Or switch to the 74595, which has such a latch built in.
1 Note that I had to tweak the capacitor value in the simulation in order to make the numbers come out right. In practice, you'll want to select a standard value for the capacitor, and replace the upper timing resistor with the combination of a 2200 Ω fixed resistor and a 5000 Ω trimpot set to about its midpoint. Adjust as needed to get the clock edges where they need to be.
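The 156 µs / 104 µs figures in point 2 can be sanity-checked without the simulator. A small sketch, assuming the usual astable arrangement (capacitor charged through R6 + R5 towards Vcc, discharged through R5, comparator thresholds at 1/3 and 2/3 Vcc, first charge starting from 0 V after reset):

```python
import math

R6, R5, C = 4.7e3, 1.0e3, 22.4e-9   # upper resistor, lower resistor, timing capacitor

# First interval: charge 0 -> 2/3 Vcc through R6 + R5, then discharge 2/3 -> 1/3 Vcc through R5.
first = math.log(3) * (R6 + R5) * C + math.log(2) * R5 * C
# Steady state: charge 1/3 -> 2/3 Vcc through R6 + R5, discharge 2/3 -> 1/3 Vcc through R5.
steady = math.log(2) * (R6 + 2 * R5) * C

print(f"first interval  ~ {first * 1e6:.0f} us")    # ~156 us
print(f"later intervals ~ {steady * 1e6:.0f} us")   # ~104 us
```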
• Shouldn't I introduce a delay of 52 µs and then start the 555 timer? – Albert Luna Feb 28 at 15:44
• It would be good to show on a scope trace how end-of-frame would need to be handled to ensure that the starting voltage is low enough to extend the length of the first pulse adequately. – supercat Feb 28 at 17:32
• @supercat: Note how much faster the discharge curve is in the simulation. The width of the stop bit is more than enough to prepare the circuit for the next start bit -- 104 us = 4.6*RC. But that does raise a point that I hadn't considered: if the last data bit is zero, it will prevent the circuit from entering the idle state at all. Hmm, back to the drawing board! As shown, the circuit is only good for 7-n-2 data. – Dave Tweed Feb 28 at 19:25
• @DaveTweed: Whether the stop bit is adequate would depend upon when one decides to reset the 555. If one resets the 555 after the last data bit, and won't detect a start condition until a falling edge on the data line, there would be plenty of time. If one doesn't reset the 555 until the nominal middle of the stop bit, the oscillator is running 3% slow, and the there is zero delay before the next start bit, the timing might be a bit tight, especially since the slop will flatten out as the voltage drops. – supercat Feb 28 at 22:03
• @DaveTweed: I don't doubt that the general design could be made to work, but a proper design should include an analysis of things like frequency tolerance, which would require seeing how the system behaves between the end of one byte and the start of the next. – supercat Feb 28 at 22:05 |
## Variational Helium Ground State Energy
We will now add one parameter to the hydrogenic ground state wave function and optimize that parameter to minimize the energy. We could add more parameters but let's keep it simple. We will start with the hydrogen wavefunctions but allow for the fact that one electron "screens" the nuclear charge from the other. We will assume that the wave function changes simply by the replacement \(Z\rightarrow Z^*\).
Of course the \(Z\) in the Hamiltonian doesn't change.
So our ground state trial function is the product of two hydrogenic 1s states with \(Z\) replaced by the effective charge \(Z^*\) (written out below).
Minimize the energy.
We can recycle our previous work to do these integrals. First, replace the \(Z\) in the one-electron Hamiltonian with a \(Z^*\) and put in a correction term. This makes that part just a hydrogen energy. The correction term is just a constant over \(r\), so we can also write that in terms of the hydrogen ground state energy.
Then we reuse the perturbation theory calculation to get the electron-electron repulsion term \(\left\langle e^2/r_{12}\right\rangle\).
Use the variational principle to determine the best \(Z^*\).
Putting these together we get our estimate of the ground state energy.
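The displayed equations did not survive extraction; for reference, the standard form of this calculation (with \(a_0\) the Bohr radius, so that \(e^2/2a_0=13.6\) eV is minus the hydrogen ground state energy) is

\[
\psi_{Z^*}(\vec r_1,\vec r_2)=\frac{Z^{*3}}{\pi a_0^3}\,e^{-Z^*(r_1+r_2)/a_0},
\qquad
\langle H\rangle=\left[Z^{*2}-2ZZ^{*}+\tfrac{5}{8}Z^{*}\right]\frac{e^2}{a_0},
\]
\[
\frac{\partial\langle H\rangle}{\partial Z^{*}}=0
\;\Rightarrow\;
Z^{*}=Z-\tfrac{5}{16}=\tfrac{27}{16},
\qquad
\langle H\rangle_{\min}=-\left(Z-\tfrac{5}{16}\right)^{2}\frac{e^{2}}{a_{0}}\approx -77.5\ \mathrm{eV},
\]

to be compared with the measured value of about \(-79.0\) eV.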
Now we are within a few percent. We could use more parameters for better results.
Jim Branson 2013-04-22 |
## understanding the Hastings algorithm
Posted in Books, Statistics with tags , , , , , on August 26, 2014 by xi'an
David Minh and Paul Minh [who wrote a 2001 Applied Probability Models] have recently arXived a paper on “understanding the Hastings algorithm”. They revert to the form of the acceptance probability suggested by Hastings (1970):
$\rho(x,y) = s(x,y) \left(1+\dfrac{\pi(x) q(y|x)}{\pi(y) q(x|y)}\right)^{-1}$
where s(x,y) is a symmetric function keeping the above between 0 and 1, and q is the proposal. This obviously includes the standard Metropolis-Hastings form of the ratio, as well as Barker’s (1965):
$\rho(x,y) = \left(1+\dfrac{\pi(x) q(y|x)}{\pi(y) q(x|y)}\right)^{-1}$
which is known to be less efficient by accepting less often (see, e.g., Antonietta Mira’s PhD thesis). The authors also consider the alternative
$\rho(x,y) = \min(\pi(y)/ q(y|x),1)\,\min(q(x|y)/\pi(x),1)$
which I had not seen earlier. It is a rather intriguing quantity in that it can be interpreted as (a) a simulation of y from the cutoff target corrected by reweighing the previous x into a simulation from q(x|y); (b) a sequence of two acceptance-rejection steps, each concerned with a correspondence between target and proposal for x or y. There is an obvious caveat in this representation when the target is unnormalised since the ratio may then be arbitrarily small… Yet another alternative could be proposed in this framework, namely the delayed acceptance probability of our paper with Marco and Clara, one special case being
$\rho(x,y) = \min(\pi_1(y)q(x|y)/\pi_1(x) q(y|x),1)\,\min(\pi_2(y)/\pi_1(x),1)$
where
$\pi(x)\propto\pi_1(x)\pi_2(x)$
is an arbitrary decomposition of the target. An interesting remark in the paper is that any Hastings representation can alternatively be written as
$\rho(x,y) = \min(\pi(y)/k(x,y)q(y|x),1)\,\min(k(x,y)q(x|y)/\pi(x),1)$
where k(x,y) is a (positive) symmetric function. Hence every single Metropolis-Hastings is also a delayed acceptance in the sense that it can be interpreted as a two-stage decision.
The second part of the paper considers an extension of the accept-reject algorithm where a value y proposed from a density q(y) is accepted with probability
$\min(\pi(y)/ Mq(y),1)$
and else the current x is repeated, where M is an arbitrary constant (incl. of course the case where it is a proper constant for the original accept-reject algorithm). Curiouser and curiouser, as Alice would say! While I think I have read some similar proposal in the past, I am a wee bit intrigued at the appeal of using only the proposed quantity y to decide about acceptance, since it does not provide the benefit of avoiding generations that are rejected. In this sense, it appears as the opposite of our vanilla Rao-Blackwellisation. (The paper however considers the symmetric version called the independent Markovian minorizing algorithm that only depends on the current x.) In the extension to proposals that depend on the current value x, the authors establish that this Markovian AR is in fine equivalent to the generic Hastings algorithm, hence providing an interpretation of the “mysterious” s(x,y) through a local maximising “constant” M(x,y). A possibly missing section in the paper is the comparison of the alternatives, albeit the authors mention Peskun’s (1973) result that exhibits the Metropolis-Hastings form as the optimum.
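To see the Metropolis-Hastings and Barker acceptance rules of the opening paragraphs side by side, here is a small sketch of mine (not from the paper): a standard Gaussian target with a symmetric random-walk proposal, so that the proposal densities cancel in the ratio.

```python
import numpy as np

rng = np.random.default_rng(0)
log_pi = lambda x: -0.5 * x**2               # standard Gaussian target (unnormalised)

def sampler(rule, n=50_000, scale=2.0):
    x, chain, accepted = 0.0, np.empty(n), 0
    for i in range(n):
        y = x + scale * rng.standard_normal()    # symmetric proposal, q cancels
        r = np.exp(log_pi(y) - log_pi(x))
        alpha = min(1.0, r) if rule == "metropolis" else r / (1.0 + r)   # Barker (1965)
        if rng.uniform() < alpha:
            x = y
            accepted += 1
        chain[i] = x
    return chain, accepted / n

for rule in ("metropolis", "barker"):
    chain, rate = sampler(rule)
    print(rule, f"acceptance {rate:.2f}", f"mean {chain.mean():+.2f}", f"var {chain.var():.2f}")
```

Both chains target the same distribution; the Barker rule simply accepts less often, in line with Peskun's ordering quoted above.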
## the intelligent-life lottery
Posted in Books, Kids with tags , , , , , , , on August 24, 2014 by xi'an
In a theme connected with one argument in Dawkins’ The God Delusion, The New York Time just published a piece on the 20th anniversary of the debate between Carl Sagan and Ernst Mayr about the likelihood of the apparition of intelligent life. While 20 years ago, there was very little evidence if any of the existence of Earth-like planets, the current estimate is about 40 billions… The argument against the high likelihood of other inhabited planets is that the appearance of life on Earth is an accumulation of unlikely events. This is where the paper goes off-road and into the ditch, in my opinion, as it makes the comparison of the emergence of intelligent (at the level of human) life to be “as likely as if a Powerball winner kept buying tickets and — round after round — hit a bigger jackpot each time”. The later having a very clearly defined probability of occurring. Since “the chance of winning the grand prize is about one in 175 million”. The paper does not tell where the assessment of this probability can be found for the emergence of human life and I very much doubt it can be justified. Given the myriad of different species found throughout the history of evolution on Earth, some of which evolved and many more which vanished, I indeed find it hard to believe that evolution towards higher intelligence is the result of a basically zero probability event. As to conceive that similar levels of intelligence do exist on other planets, it also seems more likely than not that life took on average the same span to appear and to evolve and thus that other inhabited planets are equally missing means to communicate across galaxies. Or that the signals they managed to send earlier than us have yet to reach us. Or Earth a long time after the last form of intelligent life will have vanished…
Posted in Books, Travel with tags , , , , , , , , , , on August 23, 2014 by xi'an
I had planned my summer read long in advance to have an Amazon shipment sent to my friend Natesh out of my Amazon associate slush funds. While in Boston and Maine, I read Richard Dawkins’ The God delusion, the fourth Kelly McCullough’s Fallen Blade novel, Blade reforged, the second Ancient Blades novel, unrelated to the above, A thief in the night, by David Chandler, and also the second Tad Williams’ Bobby Dollar novel, Happy Hour in Hell. The God delusion is commented on in another post.
Blade reforged is not a major novel, unsurprisingly for a fourth entry, but pleasant nonetheless, especially when reading in the shade of a pavilion on Revere Beach! The characters are mostly the same as previously and it could be that the story has (hopefully) come to an end, with (spoilers!) the evil ruler replaced by the hero’s significant other and his mystical weapons returned to him. A few loose ends and a central sword fight with a more than surprising victory, but a good summer read. Checking on Kelly McCullough’s website, I notice that two more novels are in the making….
Most sadly, David Chandler’s A thief in the night had exactly the same shortcomings as another book I had previously read and maybe reviewed, even though I cannot trace the review or even remember the title of the book (!), and somewhat those of Tad Williams’ Happy Hour in Hell as well, that is, once again a subterranean adventure in a deserted mythical mega-structure that ends up being not deserted at all and even less plausible. I really had to be stuck on a beach or in an airport lounge to finish it! The points noted about Den of Thieves apply even more forcibly here, that is, very charicaturesque characters and a weak and predictable plot. With the addition of the unbearable underground hidden world… I think I should have re-read my own review before ordering this book.
## the god delusion [statistically speaking]
Posted in Books, Kids, pictures, Statistics, Travel, University life with tags , , , , , on August 22, 2014 by xi'an
While in Bangalore, I spotted Richard Dawkins’ The God delusion in the [fantastic if chaotic] campus bookstore and bought the Indian edition for a nominal amount. I read most of it during my week in Boston. And finished by the lake in Maine. While I agree with most of the points made in Dawkins’ book about the irrationality of religions, and of their overall negative impact on human societies, I found the first part rather boring in that I see little appeal in dissecting so minutely the [infinitely many] incoherences of religious myths and beliefs, as this will likely miss the intended target [i.e., literal believers]. Similarly, the chapter on evolution versus intelligent design made valuable points, albeit I had already seen them before. Nothing wrong with repeating those, in particular that evolution has little to do with chance, but again unlikely to convince the [fundamentalist] masses. Overall, the book mostly focus on the Judeo-Christian-Muslim branch of religions, which may reflect on the author’s own culture and upbringing but also misses the recent attempts of Buddhism to incorporate science into their picture.
“A universe in which we are alone except for other slowly evolved intelligences is a very different universe from one with an original guiding agent whose intelligent design is responsible for its very existence.” (p.85)
What is most interesting in the book (for me) is when Dawkins tries to set the God hypothesis as a scientific hypothesis and to apply scientific methods to validate or invalidate this hypothesis. Even though there is no p-value or quantitative answer at the end. Despite the highly frequent use of “statistical” and “statistically improbable” in the corresponding chapter. What’s even more fascinating is Dawkins’ take at Bayesian arguments! Either because it is associated with a reverent or because it relies on subjective prior assessments, Bayesian statistics does not fit as a proper approach. Funny enough, Dawkins himself relies on subjective prior probabilities when discussing the likelihood of find a planet such as Earth. Now, into the details [with the Devil1] in a rather haphazard order or lack thereof: Continue reading
## on intelligent design…
Posted in Books, Kids, Travel with tags , , , , , , , on August 19, 2014 by xi'an
In connection with Dawkins’ The God delusion, which review is soon to appear on the ‘Og, a poster at an exhibit on evolution in the Harvard Museum of Natural History, which illustrates one of Dawkins’ points on scientific agnosticism. Namely, that refusing to take a stand on the logical and philosophical opposition between science and religion(s) is not a scientific position. The last sentence in the poster is thus worse than unnecessary…
## STEM forums
Posted in Books, R, Statistics, University life with tags , , , , , on August 15, 2014 by xi'an
“I can calculate the movement of stars, but not the madness of men.” Isaac Newton
When visiting the exhibition hall at JSM 2014, I spoke with people from STEM forums at the Springer booth. The concept of STEM (why STEM? Nothing to do with STAN! Nor directly with Biology. It stands as the acronym for Science, Technology, Engineering, and Mathematics.) is to create a sort of peer-reviewed Cross Validated where questions would be filtered (in order to avoid the most basic questions like “How can I learn about Bayesian statistics without opening a book?” or “What is the Binomial distribution?” that often clutter the Stack Exchange boards). That’s an interesting approach which I will monitor in the future, as on the one hand, it would be nice to have a Statistics forum without “lazy undergraduate” questions as one of my interlocutors put it, and on the other hand, to see how STEM forums can compete with the well-established Cross Validated and its core of dedicated moderators and editors. I left the booth with a neat tee-shirt exhibiting the above quote as well as alpha-tester on the back: STEM forums is indeed calling for entries into the Statistics section, with rewards of ebooks for the first 250 entries and a sweepstakes offering a free trip to Seattle next year!
## Bangalore workshop [ಬೆಂಗಳೂರು ಕಾರ್ಯಾಗಾರ] and new book
Posted in Books, pictures, R, Statistics, Travel, University life with tags , , , , , , , , , , , , on August 13, 2014 by xi'an
On the last day of the IFCAM workshop in Bangalore, Marc Lavielle from INRIA presented a talk on mixed effects where he illustrated his original computer language Monolix. And mentioned that his CRC Press book on Mixed Effects Models for the Population Approach was out! (Appropriately listed as out on a 14th of July on amazon!) He actually demonstrated the abilities of Monolix live and on diabetes data provided by an earlier speaker from Kolkata, which was a perfect way to start initiating a collaboration! Nice cover (which is all I saw from the book at this stage!) that maybe will induce candidates to write a review for CHANCE. Estimation of those mixed effect models relies on stochastic EM algorithms developed by Marc Lavielle and Éric Moulines in the 90’s, as well as MCMC methods.
# Prove that among any $18$ consecutive three digit numbers there is at least one number which is divisible by the sum of its digits.
The question is as stated in the title. I feel like the question is geared towards some kind of case that yields to the divisibility test for $9$, but I think an argument can be made for other numbers as well. For example in the sequence $100$ ,...,$117$ we have of course $3 \mid 102$ and $9 \mid 117$, but we also have $2 \mid 110$ and $4 \mid 112$. And secondly in sequences where the digit sum is above $9$, we still have divisors that are multiples of $9$. For example $18 \mid 990$.
So if anyone could help me with a proof, I'd be really grateful. Thank you for your help.
• If all else fails, compile a list those of the 3-digit numbers that satisfy the property, and check that no gap is larger than 17 :-) – Henning Makholm Aug 1 '16 at 8:40
• @HenningMakholm Honestly, that is probably also the easiest way to do it :) – 5xum Aug 1 '16 at 8:41
• @HenningMakholm Only if all else fails though :P – Airdish Aug 1 '16 at 8:45
• Tis easily remedied, however, by appending, "then generalize to base $b$" to the problem. – Henning Makholm Aug 1 '16 at 8:45
• Interestingly the value of 18 is actually the smallest possible. There are no numbers with that required property in the 17 long ranges: $[559,575]$, $[667,683]$, $[739,755]$,$[847,863]$, $[937,953]$, $[973,989]$. – Ian Miller Aug 1 '16 at 8:53
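Henning Makholm's brute-force fallback takes only a few lines; a quick sketch that confirms both the claim and Ian Miller's gaps of 17:

```python
def digit_sum(n):
    return sum(int(d) for d in str(n))

good = [n for n in range(100, 1000) if n % digit_sum(n) == 0]
gaps = [b - a for a, b in zip(good, good[1:])]
print(max(gaps))   # 18: the worst case leaves a 17-number hole, e.g. 559..575

# Every window of 18 consecutive three-digit numbers contains at least one such number.
assert all(any((n + k) % digit_sum(n + k) == 0 for k in range(18))
           for n in range(100, 983))
```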
## 1 Answer
We can use your idea: there is at least one multiple of $9$ in this sequence, so the sum of its digits will be $9$, $18$ or $27$. Let's separate some cases:
1. The sum is $27$: in this case, the number is $999$, so $990$ is also in the sequence and satisfies the properties.
2. The sum is $9$, and there is nothing to do.
3. The sum is $18$ and the number is even, and there is nothing to do again.
4. The sum is $18$ and the number is odd: in this case, let $x$ be this multiple of $9$. Then either $x+9$ or $x-9$ will be in the sequence, and it will fall in case 2. or 3. above. Anyway we are done.
• Perhaps it's clearer to say "every multiple of 18 has this property", so you need to deal with only case 2 and 3 (and 1 just to show that it's impossible). – Henning Makholm Aug 1 '16 at 9:02
• $999=27\times37$. – Gerry Myerson Aug 1 '16 at 9:23
• @GerryMyerson I hadn't noticed that. Thanks – Luiz Cordeiro Aug 1 '16 at 22:53 |
## enthalpy of dissolution of potassium nitrate in water
The value for ΔH is positive, meaning that heat must be added for potassium nitrate to dissolve. A substance's molar enthalpy of solution is the heat absorbed or released when one mole of the substance is dissolved in water.
A calorie is an energy unit. Transfer the copper sulphate powder, which has already been weighed.
Dissolving potassium nitrate in water is an endothermic process because the hydration of the ions when the crystal dissolves does not provide as much energy as is needed to break up the lattice. Solution enthalpy is the amount of heat released or absorbed when one mole of a solvent (solid/liquid) is dissolved in such a large amount of solvent (usually water) that further dilution does not change heat. 2 0 obj I.INTRODUCTION <>/ExtGState<>/ProcSet[/PDF/Text/ImageB/ImageC/ImageI] >>/MediaBox[ 0 0 612 792] /Contents 4 0 R/Group<>/Tabs/S/StructParents 0>> The overall heat of the solution can be either endothermic or exothermic, depending on the relative amount of energy needed to break bonds initially, as well as how much is released upon the formation of solute-solvent bonds. |
Matrix norms, divergences, metrics
I write the singular value decomposition of a $$d_1\times d_2$$ matrix $$\mathbf{B}$$
$\mathbf{B} = \mathbf{Q}_1\boldsymbol{\Sigma}\mathbf{Q}_2$
where we have unitary matrices $$\mathbf{Q}_1,\, \mathbf{Q}_2$$ and a matrix $$\boldsymbol{\Sigma}$$ with non-negative diagonal entries, of respective dimensions $$d_1\times d_1,\,d_2\times d_2,\,d_1\times d_2$$.
The diagonal entries of $$\boldsymbol{\Sigma}$$, written $$\sigma_i(B)$$, are the singular values of $$\mathbf{B}$$.
For Hermitian $$\mathbf{H}$$ matrices we may write an eigenvalue decomposition
$\mathbf{H} = \mathbf{Q}\boldsymbol{\Lambda}\mathbf{Q}^*$
for unitary $$\mathbf{Q}$$ and a diagonal matrix $$\boldsymbol{\Lambda}$$ whose entries $$\lambda_i(\mathbf{H})$$ are the eigenvalues.
🏗
Frobenius norm
Coincides with the $$\ell_2$$ norm when the matrix happens to be a column vector.
We can define this in terms of the entries $$b_{jk}$$ of $$\mathbf{B}$$:
$\|\mathbf{B}\|_F^2:=\sum_{j=1}^{d_1}\sum_{k=1}^{d_2}|b_{jk}|^2$
Equivalently, if $$\mathbf{B}$$ is square,
$\|\mathbf{B}\|_F^2=\text{tr}(\mathbf{B}\mathbf{B}^*)$
If we have the SVD, we might instead use
$\|\mathbf{B}\|_F^2=\sum_{j=1}^{\min(d_1,d_2)}\sigma_{j}(B)^2$
Schatten norms
incorporating nuclear and Frobenius norms.
If the singular values are denoted by $$\sigma_i$$, then the Schatten p-norm is defined by
$\|A\|_{p}=\left(\sum _{i=1}^{\min\{m,\,n\}}\sigma _{i}^{p}(A)\right)^{1/p}.$
The most familiar cases are p = 1, 2, ∞. The case p = 2 yields the Frobenius norm, introduced before. The case p = ∞ yields the spectral norm, which is the matrix norm induced by the vector 2-norm (see above). Finally, p = 1 yields the nuclear norm
$\|A\|_{*}=\operatorname {trace} \left({\sqrt {A^{*}A}}\right)=\sum _{i=1}^{\min\{m,\,n\}}\!\sigma _{i}(A)$
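As a quick numerical cross-check (my own illustration, not part of the original notes), all of these norms can be read off the singular values with jax.numpy; the helper name below is arbitrary:

import jax.numpy as jnp

def schatten_norm(B, p):
    # Schatten p-norm computed from the singular values of B
    s = jnp.linalg.svd(B, compute_uv=False)
    return jnp.sum(s ** p) ** (1.0 / p)

B = jnp.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
frobenius = schatten_norm(B, 2)   # agrees with jnp.linalg.norm(B, 'fro')
nuclear = schatten_norm(B, 1)     # sum of the singular values
spectral = jnp.max(jnp.linalg.svd(B, compute_uv=False))  # p -> infinity limit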
Bregman divergence
🏗 Relation to exponential family and maximum likelihood.
Mark Reid: Meet the Bregman divergences:
If you have some abstract way of measuring the “distance” between any two points and, for any choice of distribution over points the mean point minimises the average distance to all the others, then your distance measure must be a Bregman divergence.
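To make that characterisation concrete, here is a small sketch (mine, not Reid's) of the Bregman divergence generated by a convex function $$\phi$$, using autodiff for the gradient; with $$\phi(x)=\|x\|_2^2$$ it reduces to the squared Euclidean distance:

import jax
import jax.numpy as jnp

def bregman(phi, x, y):
    # D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>
    return phi(x) - phi(y) - jnp.dot(jax.grad(phi)(y), x - y)

phi = lambda x: jnp.sum(x ** 2)                      # generator: squared 2-norm
x, y = jnp.array([1.0, 2.0]), jnp.array([0.5, 0.0])
d = bregman(phi, x, y)                               # equals jnp.sum((x - y) ** 2)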
# Statics with spring force
1. Sep 10, 2008
### 838
1. The problem statement, all variables and given/known data
Collar A can slide on a frictionless vertical rod and is attached as shown to a spring. The constant of the spring is 4lb/in., and the spring is unstretched when h=12in. Knowing that the system is in equilibrium when h=16in., determine the weight of the collar.
http://img182.imageshack.us/img182/9642/staticsuc5.th.png [Broken]
2. Relevant equations
F=-kx
k=4lb/in
x=16in-12in=4in
F=-kx=-(4lb/in)*(4in)=16lbs
3. The attempt at a solution
With the pulley, the spring would still be carrying 100% of the collar's weight.
16lbs for the collar weight can't be the answer, can it?
Any help would be appreciated, it's been a while since I've taken a course in physics.
Last edited by a moderator: May 3, 2017
2. Sep 10, 2008
### alphysicist
Hi 838,
If the string were pulling vertically upwards, then the spring force would be equal to the collar's weight.
However, in this case there is another force besides the spring force and gravity that must be cancelled for the collar to be in equilibrium. What is that force?
Do you now see how to relate the spring force and the weight?
Last edited by a moderator: May 3, 2017
3. Sep 10, 2008
### 838
The tension force from the rope?
Also, the pulley would cut the tension in half, so there would be 8lbs on each side, correct?
The angle that the pulley (B) makes with the collar (A) is 53.13 degrees, so would I break it into x and y components? Ah, no, that makes no sense.
I am thoroughly confused, this is probably an extremely simple problem that I'm just complicating.
4. Sep 10, 2008
### alphysicist
With the pulley massless and frictionless, the tension along the rope will be the same and equal to the spring force. So the question is how to relate the tension force to the weight.
That's the right idea. The tension is pulling on the collar at an angle. So it's pulling up and to the right. If you draw a free body diagram and apply Newton's law to the horizontal and vertical directions, you can see how the different forces cancel each other out.
In other words, once you have the vertical and horizontal components of the tension, what other force is the vertical component counteracting, and what other force is the horizontal component counteracting?
5. Sep 10, 2008
### 838
Ok, so I've split the force components. 6.4lbs in the y direction, and 4.8lbs in the x direction.
The vertical component is opposing gravity and the horizontal component is opposing the bar that is holding the collar? This seems like it would make sense, since the system would be in equilibrium when all these forces cancel.
Would the collar weigh 6.4lbs?
6. Sep 10, 2008
### alphysicist
The tension is not cut in half by the pulley; its magnitude is equal to the spring force (16 lbs). What then are the $x$ and $y$ components?
7. Sep 10, 2008
### 838
Ah, my mistake, I wrote 8lbs on my paper and neglected to erase.
x comp=16*sin(36.86)=9.6lbs
y comp=16*cos(36.86)=12.8lbs
So the collar weighs 12.8lbs.
8. Sep 10, 2008
### alphysicist
I was looking over how you got the force of 16lbs in the first place, and I think that's not quite right. The stretch in the spring is related to the change in the length of the hypotenuse.
However, the 4in increase is the increase in the length of the vertical leg. Using the Pythagorean theorem on the before and after triangle will let you find the change in length of the hypotenuse, and it's that change that will represent the stretch of the spring.
Once you find the new force (which will be somewhat less than 16lbs), I believe you can follow the same procedure as above and get the correct answer.
9. Sep 11, 2008
### 838
So, the hypotenuse BA when h=12 is 16.97, and when h=16 it is 20.
To find the spring force, F=-k(xo-xf) =-4lb/in(16.97in-20in)=12.12lbs
Now, to find the angle, which would be sin($$\theta$$)=12/20 so, $$\theta$$ = 36.869 degrees.
Now, splitting components, y=12.12*cos(36.869)=9.696lbs = vertical force.
x=12.12*sin(36.869)=7.32lbs = horizontal force.
So the weight of the collar should be 9.7lbs.
Thank you so much for your input. I'm sorry I'm a little slow when it comes to physics, like I said, it's been a while. |
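For anyone checking the arithmetic, here is a quick numeric verification of the final reasoning (my own addition, not part of the thread; the 12 in horizontal distance is inferred from the 16.97 in hypotenuse at h = 12 in):

import math

k = 4.0                        # spring constant, lb/in
horiz = 12.0                   # horizontal distance between rod and pulley, inferred above, in
L0 = math.hypot(12.0, horiz)   # geometry when the spring is unstretched, ~16.97 in
L1 = math.hypot(16.0, horiz)   # geometry at equilibrium, 20 in
F = k * (L1 - L0)              # rope/spring tension, ~12.12 lb
W = F * 16.0 / L1              # vertical component of tension balancing the weight, ~9.7 lb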
# Lesson 9: Formula for the Area of a Triangle
Let’s write and use a formula to find the area of a triangle.
## 9.1: Bases and Heights of a Triangle
Study the examples and non-examples of bases and heights in a triangle. Answer the questions that follow.
• These dashed segments represent heights of the triangle.
• These dashed segments do not represent heights of the triangle.
Select all the statements that are true about bases and heights in a triangle.
1. Any side of a triangle can be a base.
2. There is only one possible height.
3. A height is always one of the sides of a triangle.
4. A height that corresponds to a base must be drawn at an acute angle to the base.
5. A height that corresponds to a base must be drawn at a right angle to the base.
6. Once we choose a base, there is only one segment that represents the corresponding height.
7. A segment representing a height must go through a vertex.
## 9.2: Finding the Formula for Area of a Triangle
• For each triangle, label a side that can be used as the base and a segment showing its corresponding height.
• Record the measurements for the base and height in the table, and find the area of the triangle. (The side length of each square on the grid is 1 unit.)
• In the last row, write an expression for the area of any triangle using $b$ and $h$.
| triangle | base (units) | height (units) | area (square units) |
|---|---|---|---|
| A | | | |
| B | | | |
| C | | | |
| D | | | |
| any triangle | $b$ | $h$ | |
## 9.3: Applying the Formula for Area of Triangles
For each triangle, circle a base measurement that you can use to find the area of the triangle. Then, find the area of any three triangles. Show your reasoning.
## Summary
• We can choose any of the three sides of a triangle to call the base. The term “base” refers to both the side and its length (the measurement).
• The corresponding height is the length of a perpendicular segment from the base to the vertex opposite of it. The opposite vertex is the vertex that is not an endpoint of the base.
Here are three pairs of bases and heights for the same triangle. The dashed segments in the diagrams represent heights.
A segment showing a height must be drawn at a right angle to the base, but it can be drawn in more than one place. It does not have to go through the opposite vertex, as long as it connects the base and a line that is parallel to the base and goes through the opposite vertex, as shown here.
The base-height pairs in a triangle are closely related to those in a parallelogram. Recall that two copies of a triangle can be composed into one or more parallelograms. Each parallelogram shares at least one base with the triangle.
For any base that they share, the corresponding height is also shared, as shown by the dashed segments.
We can use the base-height measurements and our knowledge of parallelograms to find the area of any triangle.
• The formula for the area of a parallelogram with base $b$ and height $h$ is $b \boldcdot h$.
• A triangle takes up half of the area of a parallelogram with the same base and height. We can therefore express the area $A$ of a triangle as: $$A = \frac12 \boldcdot b \boldcdot h$$
• The area of Triangle A is 15 square units because $\frac12 \boldcdot 5 \boldcdot 6=15$.
• The area of Triangle B is 4.5 square units because $\frac12 \boldcdot 3 \boldcdot 3 = 4.5$.
• The area of Triangle C is 24 square units because $\frac12 \boldcdot 12 \boldcdot 4 = 24$.
In each case, one side of the triangle is the base but neither of the other sides is the height. This is because the angle between them is not a right angle.
In right triangles, however, the two sides that are perpendicular can be a base and a height.
The area of this triangle is 18 square units whether we use 4 units or 9 units for the base.
## Glossary
opposite vertex
#### opposite vertex
When you choose a side to be the base in a triangle, the vertex that is not an endpoint of the base is the opposite vertex.
Point $A$ is the opposite vertex to the base $\overline{BC}$
base/height of a triangle
#### base/height of a triangle
Any of the three sides of a triangle can be chosen as a base. The term base can also refer to the length of this side. Once we have chosen a base, the corresponding height is the length of a perpendicular segment from the base to the vertex opposite it. The opposite vertex is the vertex that is not an endpoint of the base. |
# tikz - Multiple nodes with same content
I would like to know how to place multiple nodes with same content in TikZ.
I've made a macro for it, but I think that it could have a different approach.
\documentclass{article}
\usepackage{tikz}
\usepackage{bm}
\newcommand{\cross}{%
node {\LARGE\bm{$\times$}}%
}
\begin{document}
\begin{tikzpicture}
\draw[black]
(1,1) \cross
(3,1) \cross
(1,4) \cross
(3,4) \cross;
\end{tikzpicture}
\end{document}
So, my question is: is there any way to do something like this?
\cross{(1,1), (3,1), (1,4), (3,4)}
-
Welcome to TeX.sx! Usually, we don't put a greeting or a "thank you" in our posts. While this might seem strange at first, it is not a sign of lack of politeness, but rather part of our trying to keep everything very concise. Upvoting is the preferred way here to say "thank you" to users who helped you. – Harish Kumar Jan 20 '13 at 1:03
Oh, I'm sorry. I'll remember it next time, thanks. – Gutierrez PS Jan 20 '13 at 1:09
@GutierrezPS, you can use a loop: \foreach \p in {(1,1),(3,1),(1,4),(3,4)}{\draw \p node {\LARGE\bm{$\times$}};}. – Sigur Jan 20 '13 at 1:11
Hm, nice approach. But it's more code than I've posted. Is there a way to put this on a macro? I mean, a macro for \cross{(1,1), (3,1), (1,4), (3,4)} with this loop? – Gutierrez PS Jan 20 '13 at 1:27
Can you write your answer separate from your question? It helps make this site more organized. :-) – hpesoj626 Jan 20 '13 at 7:36
I've figured out that Sigur's solution was exactly what I wanted (I just needed to put in a macro). So, instead of
\newcommand{\contact}{ node {\LARGE\bm{$\times$}} }
I've used
\newcommand{\contacts}[1]{ %
\foreach \p in {#1}{\p node {\LARGE\bm{$\times$}}} %
}
In this case, \contacts should be used inside a \draw block:
\draw \contacts{(1,1),(5,1),(1,4),(3,1),(5,4)};
That produces something like:
Mathematics
# Find the value of $x, y$ and $z$ in the figure.
$x=60^{o}, y=60^{o}, z=120^{o}$
##### SOLUTION
$z={ 120 }^{ \circ }$ (vertically opposite angles)
$120+x={ 180 }^{ \circ }$(sum of angles on a straight line=${ 180 }^{ \circ }$)
$\Rightarrow\ x={ 60 }^{ \circ }$
$\therefore y=x={ 60 }^{ \circ }$ (vertically opposite)
## T/R potency <5% [Regulatives / Guidelines]
❝ ❝ Somewhere in the Bioequivalence guidance I remembered that when designing the studies we try to keep T/R potency +/-5%. Can someone point out where this kind of condition existed, (latest guidance has no mention of this)
❝ – AFAIK, it was never stated by the FDA in any guidance.
It was there in the 2001 guidance as a desired requirement, not a mandatory one. The problem is that I cannot find that 2001 (first) BE/BA guidance. It is also in the 2021 guidance, lines 807-808.
# Beginner : Interpreting Regression Model Summary [duplicate]
> sal <- read.csv("/Users/YellowFellow/Desktop/Salaries.csv",header
= TRUE)
> regressionModel = lm(sal$Salary ~ sal$Yrs.since.phd)
> summary(regressionModel)
Call:
lm(formula = sal$Salary ~ sal$Yrs.since.phd)
Residuals:
Min 1Q Median 3Q Max
-84171 -19432 -2858 16086 102383
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 91718.7 2765.8 33.162 <2e-16 ***
sal$Yrs.since.phd    985.3      107.4   9.177   <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 27530 on 395 degrees of freedom
Multiple R-squared: 0.1758, Adjusted R-squared: 0.1737
F-statistic: 84.23 on 1 and 395 DF, p-value: < 2.2e-16

The above is my result from the basic linear model that I've created. I've been trying to interpret these results for some time, but I don't understand the mathematical formulas behind them or how to explain results such as the coefficients, residuals and multiple R-squared. Please be kind enough to explain this to me in a simplified manner.

Marked as duplicate by kjetil b halvorsen, Peter Flom♦ (regression), May 29 at 11:03. This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.

• Are you sure you don't have the regressor and predictor confused? It makes more sense to me to have Salary be a function of Yrs.since.phd, than the other way around. – AkselA May 29 at 8:32
• @AkselA you are correct – BPDESILVA May 29 at 9:54

## 2 Answers

Let's make sure we are on the same page: you are estimating a model of the form $Y = \beta_0 + \beta_1 X + \epsilon$, where $\epsilon$ is a random variable that follows a normal distribution (zero mean and an unknown standard deviation $\sigma$). Of course, $\beta_0$, $\beta_1$ and, don't forget, $\sigma$ are what we are trying to get by fitting the model to our data.

Let's focus first on the coefficients: the "estimates" are easy: they are just the estimated values of $\beta_0$ and $\beta_1$ (the "(Intercept)" and "sal$Yrs.since.phd" rows, respectively). They are not the real $\beta_0$ and $\beta_1$, but rather the most reasonable values given the data in the sample. You are also told the standard error of each estimate. The t-value is nothing but the ratio between the estimate and its standard error. If it is big, you will get a small p-value (like that 2.2e-16, i.e. 0.00000000000000022). The p-value is the result of a test of the hypothesis "$\beta_1$ (or the corresponding parameter) is actually 0". That low p-value is telling you that nobody believes $\beta_1$ to be 0, and $\beta_1 \neq 0$ means that $X$ is relevant for predicting $Y$.

Above the coefficients, you have information about the residuals. The residuals are nothing but the distances between your data and what your model predicts for the data (remember, we have just a straight line, so most points of the training dataset will lie somewhere near it, but not exactly on it). Minimum and maximum are pretty self-explanatory. 1Q is the smallest value that is bigger than 25% of the residuals; the same goes for the median (50%) and 3Q (75%). At the bottom you have the standard error of the residuals (we don't talk about the mean of the residuals because it's always 0; residuals are nothing but estimates of $\epsilon$), and this standard deviation is a good estimate of $\sigma$. The output also mentions the degrees of freedom (for linear regression, the number of observations minus the number of parameters).

R-squared ($R^2$) measures goodness of fit, i.e. what part of the variance in the target variable is explained by your model. In the case of simple regression, it's just the square of the correlation coefficient between $Y$ and $X$. The adjusted $R^2$ is the same thing but compensating for the number of parameters (theoretically, we could increase our $R^2$ just by including more and more variables, without that meaning the model is better; adjusted $R^2$ is useful when comparing models with different numbers of parameters, so in simple regression we don't really care too much). The final line is a test of whether every parameter $\beta$, not including $\beta_0$, is different from 0. As we only have $\beta_1$, it is equivalent to the test in the coefficients block for $\beta_1 = 0$.

• Thank you very much ! – BPDESILVA May 29 at 9:50

"formulas behind them or how to explain results such as Coefficients, Residuals & Multiple R-squared"

Formula: $\hat y = b_{0} + b_{1} x_{i}$

Coefficients: You have an intercept $b_{0}$ of 2.033 and a regression weight $b_{1}$ of 1.784e-04. To visualize what that means, look at the following plot: the intercept is the value on the $y$ axis if $x = 0$, because $\hat y = b_{0} + b_{1} \cdot 0 = b_{0}$. Visually speaking, that is the point where the regression line crosses the $y$ axis. The $b_{1}$ coefficient tells you how the predicted $\hat y$ values change if $x$ changes by +1. Hence, a positive $b_{1}$ coefficient indicates a rising and a negative $b_{1}$ coefficient a falling regression line. In your case this means that if the x value is zero, the dependent variable y is 2.033. Further, if x increases by 1, the dependent variable y increases by 1.784e-04.

Residuals: You can make predictions with the formula above. You can predict what $y$ someone should have with an $x$ of 12,000, for example. In your case that would be $\hat y = 2.033 + 1.784\mathrm{e}{-4} \cdot 12{,}000 = 4.1738$. So according to your model, someone with an $x$ of 12,000 should have a y of 4.1738. But it may be that there actually are people in your dataset with an $x$ of 12,000, and it is likely that their actual y values are not exactly 4.1738 but, let's say, 6.1738 and 2.1738. So your prediction made some mistake, which is 6.1738 - 4.1738 = 2 for one person and 2.1738 - 4.1738 = -2 for the other. As you can see, the predicted value can be too high or too low, and this could give a mean error of 0 (like here: the mean of +2 and -2 is 0). This would be misleading, because an error of zero implies there is no error. To avoid that we usually use squared error values, i.e. (6.1738 - 4.1738)$^2$ and (2.1738 - 4.1738)$^2$. By the way, in OLS the regression coefficients are estimated by "minimizing the sum of the squares of the differences between the observed dependent variable (values of the variable being predicted) in the given dataset and those predicted by the linear function" (see here).

R-squared: This value tells you the proportion of the variation of your dependent variable y that is explained by the regression model. In your model the predictor explained 17.58% of the variation in your dependent variable. Keep in mind that you should use the adjusted version of R-squared if you want to compare models with different numbers of predictors.

Note that you write sal$Yrs.since.phd ~ sal$Salary, and if Yrs.since.phd means "years since PhD" it should possibly be the other way around: what you probably want to do is to predict the salary of a person from the years since the PhD, and not the years since the PhD from the salary. If so, you can simply switch both variables.
• Thank you very much ! – BPDESILVA May 29 at 9:50 |
Line bundles on $S^2$ and $\pi_2(\mathbb{R}P^2)$
Real line bundles on $$S^2$$ are all trivial, but what about the following way to think about a line bundle: we view a line bundle on $$S^2$$ (thought of as living in $$3$$d space) as providing a map $$S^2 \to \mathbb{R}P^2$$, the space of lines; and since $$\pi_2(\mathbb{R}P^2) \simeq \mathbb{Z}$$ we should have a $$\mathbb{Z}$$ worth of different line bundles.
Clearly I’ve cheated very badly in the latter line of thought; but I perhaps am still a bit confused at what step I have cheated so badly; is it that the following intuition just makes no sense?
• if I have a manifold $$M \hookrightarrow \mathbb{R}^n$$ and a line bundle $$L \mapsto M$$; can I always find a vector field $$V$$ on $$\mathbb{R}^n$$ that is not zero anywhere on $$M$$, such that the line bundle $$L$$ looks like what you get from ‘integrating $$V$$ around $$M$$‘? I think this is another way of saying my intuition that line bundles come from maps $$M \to \mathbb{R}P^{n-1}$$, which might be totally wrong; and which I’m not too sure how to formalise
EDIT : I just realised since $$\pi_2(\mathbb{R}P^3)$$ is trivial, even if my intuition above was correct, you’d still have just the trivial line bundle on $$S^2$$ since you can always "view the lines in a higher dimension" and rotate them away; if that makes any sense. Sorry if this is all just gibberish: just trying to learn
Mathematics Asked by questions on December 29, 2020
One Answer
The classifying space of real line bundles is $$\mathbb{RP}^{\infty}$$, not $$\mathbb{RP}^2$$; $$\mathbb{RP}^2$$ instead classifies line subbundles of the trivial $$3$$-dimensional real vector bundle $$\mathbb{R}^3$$ (and similarly $$\mathbb{RP}^n$$ classifies line subbundles of the trivial $$n+1$$-dimensional real vector bundle $$\mathbb{R}^{n+1}$$).
So what the calculation of $$\pi_2(\mathbb{RP}^2)$$ vs. $$\pi_2(\mathbb{RP}^n), n \ge 3$$ reveals is that there are a $$\mathbb{Z}$$'s worth of real line subbundles of $$\mathbb{R}^3$$ on $$S^2$$ but that these bundles all become isomorphic after adding an additional copy of $$\mathbb{R}$$. With a little effort it should be possible to write down these line subbundles and the resulting isomorphisms explicitly. Probably the normal bundle of the embedding $$S^2 \to \mathbb{R}^3$$ is a generator.
If $$X$$ is any compact Hausdorff space then every vector bundle on it is a direct summand of a trivial bundle, so for line bundles what this tells us is that every line bundle is represented by a map $$X \to \mathbb{RP}^n$$ for some $$n$$ (but isomorphisms of line bundles may require passing to a larger value of $$n$$ to define). This connects up nicely with the picture where $$\mathbb{RP}^{\infty}$$ is the filtered colimit of the $$\mathbb{RP}^n$$'s, because a map $$X \to \mathbb{RP}^{\infty}$$ has image contained in some $$\mathbb{RP}^n$$ by compactness.
Correct answer by Qiaochu Yuan on December 29, 2020
# Search for collectivity with azimuthal J/$\psi$-hadron correlations in high-multiplicity p-Pb collisions at $\sqrt{s}$ = 5.02 and 8.16 TeV
Submission Date:
20/09/2017
Article Information
Submission Form
System:
p-Pb
Energy:
5.02 TeV
Energy:
8 TeV
Abstract Plain Text:
We present a measurement of angular correlations between inclusive \jpsi\ and charged
hadrons in p--Pb collisions with the ALICE detector at the CERN LHC. The \jpsi\ are reconstructed at forward (2.03 $<$ y $<$ 3.53) and
backward ($-$4.46 $<$ y $<$ $-$2.96) rapidity via their $\mu^+\mu^-$ decay channel while the charged hadrons are reconstructed at mid-rapidity
($-$1.8 $<$ $\eta$ $<$ 1.8). The correlations are expressed in terms of associated
charged-hadron yields per \jpsi\ trigger. A rapidity gap of at least 1.5 units between
the trigger \jpsi\ and the associated charged hadrons is required. Possible collective correlations are assessed by subtracting the
associated per-trigger yields in the low-multiplicity collisions from the ones in the high-multiplicity collisions.
After the subtraction we observe a strong indication of remaining symmetric structures at $\Delta\varphi$ $\approx$ 0 and $\Delta\varphi$ $\approx$ $\pi$,
similar to those previously found in two-particle correlations
at mid and forward rapidity. The corresponding second-order harmonic coefficient $v_2$ in the transverse momentum interval between 3 and 6 GeV/$c$
is found to be non-zero with a total significance of about 5$\sigma$.
The obtained results are similar to the $v_2$ coefficients measured in
Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV, suggesting a common mechanism at the origin of the \jpsi\ $v_2$.
Possible implications for the theoretical models of the J/$\psi$ production are discussed. |
Calculus 8th Edition
$f'(x) = -10x + 8$ Domain: All Real Numbers
$f(x) = 4 + 8x - 5x^{2}$ $f'(x) = 8x^{1-1} - (2)5x^{2-1}$ $f'(x) = -10x + 8$ The domain is all real numbers because there are no values of $x$ that would make $f'(x)$ undefined.
# Advantages of solvability, nilpotency and semisimplicity of Lie algebras?
After pondering on the notion of solvability, nilpotency and semi-simplicity of linear Lie algebras for days (I have been reading Humphreys' Introduction to Lie algebra lately), I remember a professor saying this many many years ago, the special Euclidean group SE(3) is not interesting since it is neither solvable nor simple. I couldn't stop thinking about what it could mean.
As an engineer, I understand that by having semi-simplicity and compactness, the negative killing form serves as a bi-invariant Riemannian metric. Nilpotency of a Lie algebra leads to closed form solution to non-holonomic motion planning problems. If it is linear endomorphisms we are talking about, solvability, nilpotency corresponds to upper triangular matrices and strictly upper triangular matrices under appropriate basis. Semi-simple endomorphisms admit Jordan Chevalley decomposition. These are some potential advantages for engineering.
My question is, are there any general guidelines, either mathematical or engineering, to appreciate these structures?
The question is rather broad, and a short answer is difficult. If it comes to "appreciating these structures", one could argue that (finite-dimensional) complex semisimple Lie algebras can be nicely classified. The result and the methods are beautiful. Solvable and nilpotent Lie algebras have a much more complicated behaviour in this respect. However, they are equally important, e.g., arising in physics (Heisenberg Lie algebras are nilpotent) and many other areas (from geometry, number theory, etc). In general every finite-dimensional Lie algebra $L$ in characteristic zero has a Levi decomposition $$L=S \rtimes rad(L)$$ with a semisimple Levi subalgebra $S$ and the solvable radical $rad(L)$, which is the largest solvable ideal of $L$.
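As a concrete illustration tied to the SE(3) example in the question (a standard fact, added here for orientation rather than taken from the answer above): the Lie algebra of the special Euclidean group has Levi decomposition
$$\mathfrak{se}(3) = \mathfrak{so}(3) \ltimes \mathbb{R}^3,$$
with semisimple Levi subalgebra $\mathfrak{so}(3)$ (infinitesimal rotations) and abelian, hence solvable, radical $\mathbb{R}^3$ (translations). Since both parts are nonzero, $\mathfrak{se}(3)$ is neither semisimple nor solvable, which is one way to read the professor's remark.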
## The More You Know: DiVincenzo Criteria
The DiVincenzo criteria for implementing a quantum computer are a set of requirements that any candidate (circuit model) quantum computer must satisfy. These five requirements, plus two relating to the communication of quantum information, were formulated by David P. DiVincenzo in 2000 and are stated as follows:
1. A scalable physical system with well characterized qubits.
2. The ability to initialize the state of the qubits to a simple fiducial state, such as $|000...\rangle$.
3. Long relevant decoherence times, much longer than the gate operation time.
4. A “universal” set of quantum gates.
5. A qubit-specific measurement capability.
6. The ability to interconvert stationary and flying qubits $^1$.
7. The ability faithfully to transmit flying qubits between specified locations.
$^1$ Flying qubit: qubits that are readily transmitted from place to place.
References:
[1] “The Physical Implementation of Quantum Computation”, D. P. DiVincenzo. arXiv:quant-ph/0002077
I'm Marco Cerezo, I have a Ph.D in Physics and I'm currently a Postdoctoral Research Associate at Los Alamos National Laboratory in New Mexico, USA. My main fields of study are Quantum Information, Quantum Computing and Condensed Matter. Currently I'm working to develop novel quantum algorithms which can be useful in near-term quantum devices.
## Precalculus (6th Edition) Blitzer
$4$; $16$
Completing the square for the $x$ group can be done by adding the square of one-half of the middle term (x-term). Thus, $(\frac{4}{2})^2=4$ will be added to both sides of the equation. Completing the square for the $y$ group can be done by adding the square of one-half of the middle term (y-term). Thus, $(\frac{-8}{2})^2=(-4)^2=16$ will be added to both sides of the equation. Therefore, the missing expressions in the given statements are: $4$ and $16$, respectively. |
# What is the meaning of normalization of varieties in complex geometry?
There is a question already asked here about this. But I know almost nothing of algebraic geometry, nothing fancy to understand the answer. So I would highly appreciate an elementary explanation to my question.
I encountered the term normalization while I was trying to understand that a particular algebraic curve is smooth. My questions are:
1) What is the meaning of normalization?
2) Why do we perform it?
3) How is it related to smoothness of algebraic curves? To singularities of curves?
4) Is normalization cannonical? If so, how?
0) Recall that a domain $A$ is said to be normal if it is integrally closed in its fraction field $K=Frac(A)$.
This means that any element $q\in K$ killed by a monic polynomial in $A[T]$, i.e. such that for some $n\gt 0, a_i\in A$ one has $$q^n+a_1q^{n-1}+\cdots+a_n=0$$ already satisfies $q\in A$ .
A variety $V$ is said to be normal if it can be covered by open affines $V_i\subset V$ whose associated rings of functions $A_i=\mathcal O(V_i)$ are normal.
1) The normalization of an irreducible variety $X$ is a morphism $n:\tilde X\to X$ such that $\tilde X$ is a normal variety and there exists a closed subvariety $Y\subsetneq X$ such that $n|(\tilde X\setminus n^{-1}(Y))\stackrel {\cong}{\to}X\setminus Y$ is an isomorphism.
2) We perform normalization because normal varieties have better properties than arbitrary ones.
For example in normal varieties regular functions defined outside a closed subvariety of codimension $\geq 2$ can be extended to regular functions defined everywhere (“Hartogs phenomenon”) .
3) A curve is non-singular (=smooth if the base field is algebraically closed ) if and only if it is normal, so that normalization=desingularization for curves.
In higher dimensions normal varieties, alas, may have singularities.
Getting rid of these is tremendously difficult in characteristic zero (Hironaka) and is an unsolved challenge in positive characteristics.
4) Yes, normalization of $X$ is canonical in the sense that if $n’: X’\to X$ is another normalization we have an isomorphism $j:\tilde X \stackrel {\cong}{\to} X’$ commuting with the normalization morphisms, namely $n’\circ j=n$ .
At the basis of this canonicity is the fact that there is a (trivial) canonical procedure for enlarging a domain to its integral closure in its fraction field.
I like Georges Elencwajg’s answer, but I think it’s useful to see some topological intuition for what normalization does over $\mathbf{C}$.
Note we say a variety is normal if its local rings are integrally closed in their fraction field.
## Riemann Extension Theorem
This fleshes out 2) in Georges Elencwajg’s answer. Most of what follows is from Kollár’s article “The structure of algebraic threefolds”. I also enjoy the discussion around p. 391 in Brieskorn and Knörrer’s book Plane algebraic curves.
In complex analysis, you learn about the Riemann extension theorem, which says a bounded meromorphic function on any open set $U \subset \mathbf{C}$ that is holomorphic on $U \setminus \{p\}$ is in fact holomorphic on $U$. In (complex) algebraic geometry, we want something similar to hold (let’s say, for curves): that a bounded rational function that is regular on $U \setminus \{p\}$ is in fact regular on $U$.
This fails in general, however:
Example (cuspidal cubic). Let $V = \{x^2 – y^3 = 0\} \subset \mathbf{C}^2$, and let $f = (x/y)\rvert_V$. $f$ is a rational function on $V$, regular away from $0$. You can of course demand $f(0,0) = 0$ to make $f$ continuous at $(0,0)$, but this does not make $f$ regular. For, suppose $x/y = a(x,y)/b(x,y)$ for some polynomials $a,b$ such that $b(0,0) \ne 0$. Then, $xb(x,y) – ya(x,y) = 0$ on $V$, so $x^2 – y^3$ divides it. But there is a nonzero constant term in $b(x,y)$ which contributes a nonzero coefficient for $x$ in $xb(x,y) – ya(x,y)$, so it can’t be zero. Note, though, that $(x/y)^2 = y$ is regular on $V$, which shows $V$ is not normal.
The question then becomes: can we modify the curve $V$ so that the Riemann extension theorem does hold? The answer is that yes, the normalization in fact does this for us: it gives another variety $\tilde{V}$ such that the rational functions on $V$ and $\tilde{V}$ agree, but an extension property like the one above holds. This extension property is the content of
Hartog’s Theorem. Let $V$ be a normal variety and let $W \subset V$ be a subvariety such that $\dim W \le \dim V – 2$. Let $f$ be a regular function on $V – W$. Then $f$ extends to a regular function on $V$.
But returning to our example: the map $\mathbf{C} \to V$ sending $z \mapsto (z^3,z^2)$ is in fact a normalization. The function $x/y$ then pulls back to $z$ on $\mathbf{C}$, which is obviously regular!
Remark. It is possible to define normality as saying every rational function that is bounded in a neighborhood $U$ of a point $p$ is in fact regular on $U$, in direct analogy to the Riemann extension theorem. But the equivalence of these definitions is hard: see Kollár, Lectures on resolution of singularities, §1.4, especially Rem. 1.28.
## Separating Branches
What follows is from Mumford’s The red book of varieties and schemes, III.9.
Normality can be understood as a way to separate the “branches” of an algebraic variety at a singular point. Consider the following
Example (nodal cubic). Let $V = \{x^2(x+1) – y^2\} \subset \mathbf{C}^2$. It is not normal at $(0,0)$ since it’s singular there. Consider a small analytic neighborhood
$U = \{(x,y) \mid \lvert x \rvert < \epsilon,\ \lvert y \rvert < \epsilon\}$.
Points in $U \cap V$ satisfy $\lvert x – y \rvert \lvert x + y \rvert = \lvert x \rvert^3 < \epsilon \lvert x \rvert^2$ hence $\lvert x – y \rvert < \sqrt{\epsilon} \lvert x \rvert$ or $\lvert x + y \rvert < \sqrt{\epsilon} \lvert x \rvert$, but both can’t occur simultaneously for small enough $\epsilon$. Thus, near the origin $V$ splits into two “branches” containing points satisfying $\lvert x – y \rvert \ll \lvert x \rvert$ and $\lvert x + y \rvert \ll \lvert x \rvert$. Each piece is connected, but there is no algebraic way to separate each branch.
The normalization $\pi\colon \tilde{V} \to V$ ends up fixing this, in the following way: for each point $p \in V$, the inverse image $\pi^{-1}(p)$ is in 1-1 correspondence with the set of branches at $p$. In our particular example, it is given by $\mathbf{C} \to V$ where $z \mapsto (z^2-1,z(z^2-1))$; the two branches correspond to $z=\pm1$.
So perhaps a variety $V$ is normal if and only if at every point $p \in V$, there is only one branch. The forward direction is essentially the content of Zariski’s main theorem; see pp. 288–289 in Mumford. But the converse is false: the cuspidal cubic only has one branch but is not normal. |
# Evolving Neural Networks in JAX
Published:
“So why should I switch from <insert-autodiff-library> to JAX?”. The classic first passive-aggressive question when talking about the new ‘kid on the block’. Here is my answer: JAX is not simply a fast library for automatic differentiation. If your scientific computing project wants to benefit from XLA, JIT-compilation and the bulk-array programming paradigm – then JAX provides a wonderful API. While PyTorch relies on pre-compiled kernels and fast C++ code for most common Deep Learning applications, JAX allows us to leverage a high-level interface for programming your favourite accelerators. vmap, pmap, jit accelerate and vectorize across array dimensions/compute devices without having to deal with asynchronous bookkeeping of processes. But this is not restricted to standard gradient-based optimization setups. It also applies to many evolutionary methods. Therefore, in this post we explore how JAX can power the next generation of scalable neuroevolution algorithms:
1. We will walk through the Covariance Matrix Adaptation Evolution Strategies (CMA-ES, e. g. Hansen, 2016) and discuss challenges such as the ‘curse of dimensionality’ and the statistical estimation of high-dimensional covariance matrices.
2. We will implement the CMA-ES update equations in JAX. We will show how to get the most out of vmap and vectorize over two crucial dimensions of ES: the generation population size and the number of fitness evaluations per population member.
3. We will evolve a feedforward policy to balance a Pendulum using CMA-ES. Afterwards, we explore different hyperparameters (neural network size, mean learning rate and degree of selection) to get a better intuition for key trade-offs in ES.
4. Finally, we will analyze run and compilation times for CMA-ES generation iterations and across different hardware platforms. We will see that XLA-compilation and vectorization of vmap smoothly scales on different platforms (CPU/different GPUs).
TL;DR: JAX is awesome for scaling neuroevolution algorithms. We can vmap over both the parametrization of all population members and their stochastic fitness evaluations. By eliminating multiprocessing/MPI communication shenanigans, we can run neuroevolution experiments on modern accelerators (GPU/TPU) with almost zero engineering overhead. If you want to learn what this looks like for CMA-ES, come along for a ride. The notebook can be found here.
Note: Throughout the post we will assume that you already know the basic primitives of JAX such as jit, vmap and lax.scan. If you feel like you need to catch up on these, checkout the JAX quickstart guide or my JAX intro blog post.
Let’s start by installing and importing a couple of packages.
try:
    import jax
except ImportError:
    # JAX needs to be installed first (e.g. via pip) for this import to succeed
    import jax

%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import jax
import jax.numpy as jnp
from jax import jit, vmap, lax
import functools, time
import matplotlib.pyplot as plt
# Evolutionary Strategies and the Ask-Evaluate-Tell API
Evolutionary strategies aim to minimize a black-box function without the use of explicit analytical derivatives or gradient estimates. Instead, they rely on stochastic evaluations of the (potentially noisy) function of interest $f: \mathbb{R}^d \to \mathbb{R}$. The previously accumulated information is then cleverly integrated to inform the search distribution $\pi(\cdot)$. The sequential batches of proposal solutions $x_i^{(g)} \in \mathbb{R}^d, i=1,\dots,N$ are also known as the individuals of a generation $g$. In neuroevolution $x$ corresponds to the parameters of a neural network and $f(.)$ denotes some metric of performance. Importantly, since we are not relying on backprop and the resulting gradients, the objective does not have to be smooth and differentiable. This has a couple of practical implications:
1. We have a lot more freedom to design neural networks. Most common deep learning building blocks such as convolutional, linear and attention layers are designed with gradient descent in the back of our mind. But if we don’t need gradients to learn, we could use non-differentiable non-linearities such as spike and threshold-based activations.
2. We don’t have to carefully handcraft objective functions, which support the learning dynamics prescribed by gradient descent. Instead, we can directly optimize the function of interest. E.g. in RL we can directly aim to maximize the episode return without the need for surrogate objectives such as the mean squared Bellman error.
3. Instead of optimizing a single point estimate of the solution parameters, ES keep a search distribution over solutions. This surrogate of the objective landscape can be used to sample solutions, which may be diverse in how they solve a task. This heterogeneity can for example be used to ensemble predictions.
ES can be thought of as search algorithms, which use a type of memory or surrogate model of the objective. The specific form of the surrogate $\pi_\theta$ differs across ES (e.g. for CMA-ES it will be a scaled multivariate Gaussian). The general procedure has three repeating steps:
1. Ask: Given the current surrogate model, we “ask” for a set of evaluation candidates or a new generation, $x \sim \pi_\theta(\cdot)$.
2. Evaluate: We evaluate the fitness of each member $x_i$ in the proposed generation using the objective $f(\cdot)$, which returns a “utility” scalar. For stochastic objectives we potentially have to do so multiple times to obtain a reliable fitness estimate.
3. Tell: Next, we use $(x, f(x))$ to update the model $\theta \to \theta’$. The search distribution will be adjusted to increase the likelihood of well performing candidates. This update specifies how to cope with the exploration-exploitation trade-off.
We iterate over these three steps for a set of generations $g = 1, \dots, G$. This three-step procedure is commonly implemented in hyperparameter optimisation toolboxes (e.g. scikit-optimize) and provides a minimal API for interfacing with ES (e.g. see David Ha’s blog).
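Before turning to CMA-ES, here is a minimal sketch of what such an ask-evaluate-tell loop can look like with the functions defined later in this post (the init_cma_es arguments and the quadratic objective are placeholders of my own choosing, not the exact API of the helper file):

rng = jax.random.PRNGKey(0)
# Hypothetical init arguments - the real signature lives in helpers.es_helpers
params, memory = init_cma_es(mean_init=jnp.zeros(2), sigma_init=1.0,
                             pop_size=8, mu=4)

def objective(x):
    # Toy fitness: squared distance to the origin (to be minimized)
    return jnp.sum(x ** 2)

for g in range(50):
    rng, rng_ask = jax.random.split(rng)
    x, memory = ask(rng_ask, params, memory)    # 1. ask for a new generation
    fitness = vmap(objective)(x)                # 2. evaluate all members in parallel
    memory = tell(x, fitness, params, memory)   # 3. update the search distribution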
# The CMA-ES … and how to implement it in JAX
In CMA-ES the search distribution $\pi_\theta$ is a multivariate Gaussian with mean $m \in \mathbb{R}^d$ and scaled covariance matrix $\sigma^2 \cdot C \in \mathbb{R}^{d \times d}$. The update step adjusts the mean and covariance of the Gaussian and the stepsize standard deviation $\sigma$. Loosely speaking, the mean will be pulled into the direction of the best performing candidates, while the covariance update will aim to align the density contour of the sampling distribution with the contour lines of the objective and thereby the direction of steepest descent. That sounds plausible – but what are desirable characteristics of efficient search algorithms in the first place? They should generalize across a wide range of functions $f(.)$ and be robust to a variety of transformations of input and output. This robustness meta-objective can be recast under the notion of invariance properties and we will see that several of these apply to the solution quality of CMA-ES:
• Invariance to order-preserving transformations of the fitness function via ‘scale-ignorant’ rank truncation selection. E.g. the solution should not change if we add a constant to the objective function.
• Invariance to angle-preserving transformations of the search space if initial search point is transformed accordingly.
Let’s now see how CMA-ES accomplishes these meta-objectives and examine the individual update equations of CMA-ES as well as how to implement them in JAX. I have taken the liberty of coding up some helper functions, which aren’t essential to our understanding of either JAX or CMA-ES. This includes how we initialize and terminate the strategy and some logging utilities. Feel free to check out the source files in the linked repository.
from helpers.es_helpers import (init_cma_es,          # Initializes the parameters of the strategy
                                eigen_decomposition,  # Performs an eigendecomposition of a PSD matrix
                                check_termination,    # Checks whether the search has converged/diverged
                                init_logger)          # Initializes a logging dict for the search
from helpers.viz_helpers import plot_fitness, plot_sigma  # Visualize fitness and stepsize log
## The Ask & Tell Interface
The first step is to define the core functionality of CMA-ES and the way in which we interface with the search algorithm: by asking for a set of proposal candidates from the search distribution and evaluating the candidates. Only afterwards can we update the strategy with the information gathered during the evaluation. In CMA-ES, the ask-step samples from a multivariate Gaussian, where the sufficient statistics define the direction of the search. Intuitively, the mean should move closer to the best performing candidates, while the covariance should point our search into the direction of steepest descent. To efficiently sample from this potentially high-dimensional Gaussian, we will use the reparametrization trick:
$\mathcal{N}(m, C) \sim m + B D \mathcal{N}(\mathbf{0}, \mathbf{1}), \text{ with }$ $C^{1/2} = BDB^T \text{ and } B^T \mathcal{N}(\mathbf{0}, \mathbf{1}) \sim \mathcal{N}(\mathbf{0}, \mathbf{1}).$
The eigendecomposition of $C$ factorizes the covariance into $B$ and $D$. $B$ is an orthogonal matrix, whose columns form an orthonormal basis of eigenvectors. $D$ is a diagonal matrix of square roots of the corresponding eigenvalues of $C$. Intuitively, $D$ scales the spherical ‘base’ Gaussian distribution and can be viewed as a dimension-specific step-size matrix. The diagonal entries correspond to the standard deviations of the individual $d$ dimensions of our optimization problem. Hence, it controls how far the search distribution ‘reaches out’ along a specific axis. $B$, on the other hand, defines the orientation of these principal axes. In short: $D$ scales, $B$ orients. Finally, $\sigma$ is a dimension-independent step-size. In code this looks as follows:
def ask(rng, params, memory):
    """ Propose parameters to evaluate next. """
    C, B, D = eigen_decomposition(memory["C"], memory["B"], memory["D"])
    x = sample(rng, memory, B, D, params["n_dim"], params["pop_size"])
    memory["C"], memory["B"], memory["D"] = C, B, D
    return x, memory

@functools.partial(jit, static_argnums=(4, 5))
def sample(rng, memory, B, D, n_dim, pop_size):
    """ Jittable Multivariate Gaussian Sample Helper. """
    z = jax.random.normal(rng, (n_dim, pop_size))  # ~ N(0, I)
    y = B.dot(jnp.diag(D)).dot(z)                  # ~ N(0, C)
    y = jnp.swapaxes(y, 1, 0)
    x = memory["mean"] + memory["sigma"] * y       # ~ N(m, σ^2 C)
    return x
The memory stores all the variables which are exposed to an update from the ES. params, on the other hand, are fixed hyperparameters such as the different learning rates or the population size. We will reuse the eigendecomposition of the covariance later on, so we amortize the computation by storing $C, B, D$ in our memory dictionary. Having obtained a set of $\lambda$ candidates from an ask-function call, we can use the objective function to evaluate their fitness.
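Because each candidate is evaluated independently, this evaluation step is a natural fit for vmap. As a small sketch (the quadratic objective below is a stand-in of my own, not the Pendulum task used later):

def fitness_single(x):
    # Toy objective for a single candidate parameter vector
    return jnp.sum((x - 0.5) ** 2)

# x has shape (pop_size, n_dim); vmap maps the objective over the population axis
fitness_population = jit(vmap(fitness_single))

# For stochastic objectives one can stack a second vmap over evaluation seeds,
# e.g. an outer vmap over candidates and an inner vmap over rng keys per candidate.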
The update of a CMA-ES generation consists of 5 sequential update equations: $m$, $p_\sigma$, $\sigma$, $p_c$, and $C$ update. These are the ones stored in memory. The resulting dynamics prescribe how our search distribution evolves over the consecutive generations. Schematically, this looks as follows:
Wait! But where do $p_\sigma$ and $p_c$ come from? Empirical estimation of $C$ from a single generation is hard, especially when the number of parameters is a lot larger than the population size, $d \gg N$ (which will usually be the case when evolving neural nets). We therefore want to obtain a reliable estimate by leveraging information accumulated by previous generations. And this is where the different evolution paths come into play. Their role is to track the changes in the mean statistic and how different updates affected the next generation's performance. $p_c$ is then used to inform the update of the anisotropic part of the overall variance (so $C$) and $p_\sigma$ the update of the isotropic part (so $\sigma$). Don't worry about the equations yet, we will soon see how all of this works in detail. For now just remember that $p_c$ and $p_\sigma$ provide a memory trace that integrates over past updates. The function below wraps all five update steps as well as the initial sorting of the solutions in one call:
def tell_cma_strategy(x, fitness, params, memory):
    """ Update the surrogate ES model. """
    # Update/increase the generation counter
    memory["generation"] = memory["generation"] + 1

    # Sort new results, extract parents, store best performer
    concat_p_f = jnp.hstack([jnp.expand_dims(fitness, 1), x])
    sorted_solutions = concat_p_f[concat_p_f[:, 0].argsort()]

    # Update mean, isotropic path, stepsize, anisotropic path, cov.
    mean, y_k, y_w = update_mean(sorted_solutions, params, memory)
    memory["mean"] = mean

    p_sigma, C_2, C, B, D = update_p_sigma(y_w, params, memory)
    memory["p_sigma"], memory["C"], memory["B"], memory["D"] = p_sigma, C, B, D

    sigma, norm_p_sigma = update_sigma(params, memory)
    memory["sigma"] = sigma

    p_c, ind_sigma = update_p_c(y_w, norm_p_sigma, params, memory)
    memory["p_c"] = p_c

    C = update_covariance(y_k, ind_sigma, C_2, params, memory)
    memory["C"] = C
    return memory

# JIT-compiled version for tell interface
tell = jit(tell_cma_strategy)
Let’s take a closer look at the individual update steps and their implementation in JAX:
### Update 1: Mean Update via Truncation Selection and Reweighting
We start with the mean, which relies on truncation selection. Instead of letting all candidates pull equally on the mean update, we will only consider a subset of the top performers in the population (or parents $\mu$) to influence the update of $m$. Usually the set of parents is chosen to be around 50 percent of the entire population. The weight of each parent decreases as its rank in the population gets worse.
$m^{(g+1)} = m^{(g)} + c_m \sum_{i=1}^\mu w_i (x_{i:\lambda} - m^{(g)}),$
where $x_{:\lambda}$ denotes the fitness-sorted candidates of generation $g$ and $c_m$ represents the learning rate of the mean update. The weights $w_i$ are typically chosen to be decreasing so that the very best performing solutions are given more influence. Here is the default case for a population size $\lambda = 100$ and $\mu = 50$:
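(The original post displays a plot of these default weights at this point.) A common construction, e.g. the one in Hansen's tutorial, uses log-decreasing raw weights that are then normalized over the $\mu$ parents; whether the helper file in this post uses exactly this scheme is an assumption on my part:

pop_size, mu = 100, 50
ranks = jnp.arange(1, mu + 1)
raw_w = jnp.log((pop_size + 1) / 2) - jnp.log(ranks)   # log-decreasing in the rank
weights_truncated = raw_w / jnp.sum(raw_w)             # normalized to sum to one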
In the code below additionally we define $y_k = \frac{x_k - m^{(g)}}{\sigma^{(g)}}$ (z-scored parameters) and $y_w = \sum_{i=1}^\mu w_i y_{k, i:\lambda}$, the weighted sum over the selected and mean-normalized parent parameters. update_mean returns the updated mean and both $y_k$ and $y_w$. These will later be reused for the covariance update step.
def update_mean(sorted_solutions, params, memory):
""" Update mean of strategy. """
x_k = sorted_solutions[:, 1:] # ~ N(m, σ^2 C)
y_k_temp = (x_k - memory["mean"]) # ~ N(0, σ^2 C)
y_w_temp = jnp.sum(y_k_temp.T * params["weights_truncated"], axis=1)
mean = memory["mean"] + params["c_m"] * y_w_temp
# Comple z-scoring for later updates
y_k = y_k_temp / memory["sigma"]
y_w = y_w_temp / memory["sigma"]
return mean, y_k, y_w
### Update 2: Isotropic Evolution Path Update
In the next two steps we will derive an update for the isotropic part of the covariance matrix, a.k.a. the scaling by the ‘stepsize’ $\sigma^{(g)}$. CMA-ES uses an evolution path $p_\sigma \in \mathbb{R}^d$, which integrates over previous steps to perform cumulative step length adaptation:
$p_\sigma \leftarrow (1 - c_\sigma) p_\sigma + \sqrt{1 - (1-c_\sigma)^2} \sqrt{\mu_w} C^{-1/2} \frac{m^{(g+1)} - m^{(g)}}{\sigma^{(g)}}$
def update_p_sigma(y_w, params, memory):
    """ Update evolution path for stepsize sigma. """
    C, B, D = eigen_decomposition(memory["C"], memory["B"], memory["D"])
    C_2 = B.dot(jnp.diag(1 / D)).dot(B.T)   # C^(-1/2) = B D^(-1) B^T
    p_sigma_new = (1 - params["c_sigma"]) * memory["p_sigma"] + jnp.sqrt(
        (1 - (1 - params["c_sigma"]) ** 2) * params["mu_eff"]) * C_2.dot(y_w)
    _B, _D = None, None
    return p_sigma_new, C_2, C, _B, _D
Loosely speaking, this is meant to modulate exploration-exploitation trade-off in the following two cases:
1. If two update steps are anti-correlated (they point in opposite directions), then we are not really moving anywhere in parameter space. The updates go back and forth without a clear direction to move into, which indicates convergence. In this case cumulative step length adaptation will decrease $\sigma$.
2. If, on the other hand, the steps are pointing in the same direction, this will increase the stepsize so that the search progresses faster in the direction of consensus. Intuitively, this behavior is similar to how momentum works in gradient-based optimization.
The speed of adaptation and the timescale of integration depend on two crucial factors: the learning rate of $p_\sigma$, $c_\sigma$, and the size of the eigenvalues of $C$. The larger $c_\sigma$, the faster $p_\sigma$ will respond, but also the shorter the integration timescale. The precision of the covariance, on the other hand, provides an additional rescaling which interacts in non-trivial ways. So how do we actually update $\sigma$?
### Update 3: Cumulative Step Length Adaptation
The stepsize is a scalar, while $p_\sigma$ is $d$-dimensional. So we need to reduce things. The norm of $p_\sigma$ provides a measure of aggregated step length and a simple moving statistic for whether to increase or decrease $\sigma$. We will skip some math here, but one can show that $p_\sigma$ is in expectation standard normally distributed. We can then use an exponentially scaled update if $\mid \mid p_\sigma \mid \mid$ deviates from its expectation:
$\sigma^{(g+1)} = \sigma^{(g)} \cdot \exp \left(\frac{c_\sigma}{d_\sigma} \left(\frac{\mid \mid p_\sigma \mid \mid}{\mathbb{E}[ \mid \mid \mathcal{N}(0, 1) \mid \mid ]} -1\right)\right)$
def update_sigma(params, memory):
""" Update stepsize sigma. """
norm_p_sigma = jnp.linalg.norm(memory["p_sigma"])
sigma = (memory["sigma"] * jnp.exp((params["c_sigma"] / params["d_sigma"])
* (norm_p_sigma / params["chi_d"] - 1)))
return sigma, norm_p_sigma
Note that if $\mid \mid p_\sigma \mid \mid = \mathbb{E} [\mid \mid \mathcal{N}(0, 1) \mid \mid ]$, there won’t be any change in the stepsize. $d_\sigma \approx 1$ is the so-called damping parameter and re-scales the magnitude change of $\ln \sigma$.
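For reference, `chi_d` in the snippet above presumably stores the standard approximation of $\mathbb{E}[ \mid \mid \mathcal{N}(0, I_d) \mid \mid ]$ used throughout the CMA-ES literature; a minimal sketch, assuming this is how the constant is precomputed:

```python
# Hedged sketch: standard approximation of the expected norm of a d-dim standard Gaussian.
import jax.numpy as jnp

def expected_norm_gaussian(d):
    """Approximate E[||z||] for z ~ N(0, I_d)."""
    return jnp.sqrt(d) * (1.0 - 1.0 / (4 * d) + 1.0 / (21 * d ** 2))

print(expected_norm_gaussian(241))  # e.g. for the 241-parameter MLP used later in the post
```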
### Update 4: Anisotropic Evolution Path Update
So far so good. We now have an update formula for the mean and the isotropic part of the variance. Finally, we need a procedure for estimating the covariance. A natural starting point could be the sample estimate based on the current $x^{(g+1)}$. But this can be highly unreliable in cases where the number of parameters is a lot larger than the population size, $d \gg \lambda$. This is another example of the statistical challenge of the curse of dimensionality. Instead of relying only on the most recent generation, CMA-ES again uses an adaptation procedure which exploits the structure in successive update steps:
$p_c \leftarrow (1-c_c) p_c + \mathbf{1}_{[0, f(d, g)]}(||p_\sigma||)\sqrt{1 - (1-c_c)^2} \sqrt{\mu_w} \frac{m^{(g+1)} - m^{(g)}}{\sigma^{(g)}}$
At first glance this equation looks very similar to the update of the isotropic path update of $p_\sigma$. But there are two significant differences:
1. We don’t rescale $\frac{m^{(g+1)} - m^{(g)}}{\sigma^{(g)}}$ by the square-root of the covariance matrix $C^{-1/2}$. Hence, it remains an anisotropic variable.
2. The update depends on a boolean. The indicator function ‘stalls’ the $p_c$ update if the norm of $p_\sigma$ gets too large. This prevents an overshooting of the axes of $C$, when the stepsize is too small. Hansen (2016) notes that this is especially useful when the initial $\sigma^{(0)}$ is chosen to be too small or when the objective function is nonstationary.
def update_p_c(y_w, norm_p_sigma, params, memory):
""" Update evolution path for sigma/stepsize. """
ind_sigma_cond_left = norm_p_sigma / jnp.sqrt(
1 - (1 - params["c_sigma"]) ** (2 * (memory["generation"] + 1)))
ind_sigma_cond_right = (1.4 + 2 / (memory["mean"].shape[0] + 1)) * params["chi_d"]
ind_sigma = 1.0 * (ind_sigma_cond_left < ind_sigma_cond_right)
p_c = (1 - params["c_c"]) * memory["p_c"] + ind_sigma * jnp.sqrt((1 -
(1 - params["c_c"])**2) * params["mu_eff"]) * y_w
return p_c, ind_sigma
### Update 5: Covariance Adaptation Step
We can now use the evolution path $p_c$ for one part of the covariance matrix adaptation step: the rank-one update given by the outer product $p_c p_c^T$. The update is complemented with a rank-$\mu$ update constructed from the weighted sample covariance estimate of the most recent generation evaluation:
$C^{(g+1)} \propto (1 - c_1 - c_\mu (\sum_{i=1}^\mu w_i) + c_s) C^{(g)} + c_1 \underbrace{p_c p_c^T}_{\text{rank-one}} + c_\mu \sum_{i=1}^\mu w_i \underbrace{\frac{x_{i:\lambda} - m^{(g)}}{\sigma^{(g)}} \left(\frac{x_{i:\lambda} - m^{(g)}}{\sigma^{(g)}} \right)^T}_{\text{rank}-\min(\mu, d)}$
Intuitively, the goal of the adaptation step is to increase the chances of sampling $p_c$ and $\frac{x_{i:\lambda} - m^{(g)}}{\sigma^{(g)}}$. Again $c_1$, $c_\mu$ and $c_s$ denote a set of learning rates. For a better overview of the hyperparameters in CMA-ES check out the overview table at the end of the blog. The overall covariance matrix adaptation step looks as follows:
def update_covariance(y_k, ind_sigma, C_2, params, memory):
""" Update cov. matrix estimator using rank 1 + μ updates. """
w_io = params["weights"] * jnp.where(params["weights"] >= 0, 1,
memory["mean"].shape[0]/
(jnp.linalg.norm(C_2.dot(y_k.T), axis=0) ** 2 + 1e-20))
c_s = (1 - ind_sigma) * params["c_1"] * params["c_c"] * (2 - params["c_c"])
rank_one = jnp.outer(memory["p_c"], memory["p_c"])
rank_mu = jnp.sum(
jnp.array([w * jnp.outer(y, y) for w, y in zip(w_io, y_k)]), axis=0)
C = ((1 - params["c_1"] - params["c_mu"] * jnp.sum(params["weights"]) + c_s ) * memory["C"]
+ params["c_1"] * rank_one + params["c_mu"] * rank_mu)
return C
# Leveraging the Full Power of vmap in ES
Let’s now see how we can scale CMA-ES to optimize a small neural network for a classic Reinforcement Learning task. Traditionally, this would involve large chunks of code and communication pipelines involving multiprocessing and OpenMPI. In JAX, on the other hand, we will use vmap to take care of a lot of the engineering complexity in the following ways:
1. We jit the RL episode loop after rewriting the gym environment and using lax.scan.
2. We vmap over the evaluation episodes used to estimate the agent’s fitness.
3. Finally, we also vmap over the different proposal networks within a generation.
Pictorially this looks as follows:
There are no multiprocessing worker queues involved and we can easily scale this to accelerators such as GPUs and even TPUs. I have already taken the liberty of re-writing OpenAI’s Pendulum-v0 NumPy environment in JAX. For the simple pendulum ODE this basically boiled down to replacing all np.<op> operations with the equivalent jnp.<op> operations and avoiding the explicit use of booleans (e.g. by using masks instead). Furthermore, the RL step now takes as inputs an additional dictionary of environment variables and the current state of the environment. This allows us to jit an entire episode rollout with the help of the lax.scan primitive. We now import the simple environment helpers. Next we define a single-hidden-layer MLP policy and a policy rollout wrapper. For simplicity, we assume that the policy deterministically maps from observation to action:
from helpers.pendulum_jax import reset, step, env_params
def ffw_policy(params, obs):
""" Compute forward pass and return action from deterministic policy """
def relu_layer(W, b, x):
""" Simple ReLu layer for single sample """
return jnp.maximum(0, (jnp.dot(W, x) + b))
# Simple single hidden layer MLP: Obs -> Hidden -> Action
activations = relu_layer(params["W1"], params["b1"], obs)
mean_policy = jnp.dot(params["W2"], activations) + params["b2"]
return mean_policy
def policy_pendulum_step(state_input, tmp):
""" lax.scan compatible step transition in jax env. """
obs, state, policy_params, env_params = state_input
action = ffw_policy(policy_params, obs)
next_o, next_s, reward, done, _ = step(env_params, state, action)
carry, y = [next_o.squeeze(), next_s.squeeze(),
policy_params, env_params], [reward]
return carry, y
def pendulum_rollout(rng_input, policy_params, env_params, num_steps):
""" Rollout a pendulum episode with lax.scan. """
obs, state = reset(rng_input)
_, scan_out = jax.lax.scan(policy_pendulum_step,
[obs, state, policy_params, env_params],
[jnp.zeros(num_steps)])
# Return the sum of rewards accumulated by agent in episode rollout
return jnp.sum(jnp.array(scan_out))
Finally, it is time for the ultimate JAX magic. We vmap over both the number of different evaluation episodes and all the different neural networks in our current population. The helper v_dict indicates that the first dimension of our different dictionary parameter entries corresponds to the population dimension over which we want to vectorize. Afterwards, we jit the vmap-ed batch rollout and indicate that the environment parameters as well as the number of episode steps are static:
# vmap over different MC fitness evaluations for single pop. member
batch_rollout = jit(vmap(pendulum_rollout, in_axes=(0, None, None, None),
out_axes=0), static_argnums=(3))
# vmap over different members in the population
v_dict = {"W1": 0, "b1": 0, "W2": 0, "b2": 0}
generation_rollout = jit(vmap(batch_rollout,
in_axes=(None, v_dict, None, None),
out_axes=0), static_argnums=(3))
We need one final ingredient: When asking CMA-ES for parameter samples for the next generation, it will sample a flat vector of parameters. But the evaluation procedure requires a dictionary of layer-specific weight arrays. Hence, we need a helper for re-assembling the flat proposal vectors into the proper parameter dictionary of weights and biases for JAX. Here is a simple function that does the job for our single hidden-layer MLP:
def flat_to_network(flat_params, layer_sizes):
""" Reshape flat parameter vector to feedforward network param dict. """
pop_size = flat_params.shape[0]
W1_stop = layer_sizes[0]*layer_sizes[1]
b1_stop = W1_stop + layer_sizes[1]
W2_stop = b1_stop + (layer_sizes[1]*layer_sizes[2])
b2_stop = W2_stop + layer_sizes[2]
# Reshape params into weight/bias shapes
params = {"W1": flat_params[:, :W1_stop].reshape(pop_size,
layer_sizes[1],
layer_sizes[0]),
"b1": flat_params[:, W1_stop:b1_stop],
"W2": flat_params[:, b1_stop:W2_stop].reshape(pop_size,
layer_sizes[2],
layer_sizes[1]),
"b2": flat_params[:, W2_stop:b2_stop]}
return params
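As a quick sanity check (illustrative only, reusing the function above), the 241 flat parameters of the $[3, 48, 1]$ MLP used below reshape into the expected per-member weight and bias arrays:

```python
# Shape check for flat_to_network with a population of 100 candidates.
import jax
import jax.numpy as jnp

flat = jnp.zeros((100, 3 * 48 + 48 + 48 * 1 + 1))   # (pop_size, 241)
net = flat_to_network(flat, [3, 48, 1])
print(jax.tree_util.tree_map(lambda x: x.shape, net))
# expected: {'W1': (100, 48, 3), 'b1': (100, 48), 'W2': (100, 1, 48), 'b2': (100, 1)}
```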
Now we are ready to put everything together into the CMA search loop for the Pendulum task and a multi-layer perceptron with 48 hidden units: We start by initialising the strategy hyperparameters, the search distribution $m^{(0)}, \sigma^{(0)}, C^{(0)}$, the evolution paths $p_\sigma, p_c$ and the logger which tracks the progress of the strategy. Afterwards, we then run the ask-evaluate-tell loop over the different generation iterations.
# Setup the ES hyperparameters
num_generations = 200
num_evals_per_gen = 20
num_env_steps = 200
pop_size, parent_size = 100, 50
# Setup the random number gen., init ES and the logger
rng = jax.random.PRNGKey(0)
net_size = [3, 48, 1]
num_params = 3*48 + 48 + 48*1 + 1
mean_init, sigma_init = jnp.zeros(num_params), 1
params, memory = init_cma_es(mean_init, sigma_init, pop_size, parent_size)
top_k = 5
evo_logger = init_logger(top_k, num_params)
# Loop over different generations in evolutionary strategy
start_t = time.time()
for g in range(num_generations):
rng, rng_ask, rng_eval = jax.random.split(rng, 3)
# Ask for a set of proposal param candidates and reshape into network dicts
x, memory = ask(rng_ask, params, memory)  # `ask` = jitted sampling step defined earlier in the post (exact signature assumed)
generation_params = flat_to_network(x, net_size)
rollout_keys = jax.random.split(rng_eval, num_evals_per_gen)
# Evaluate generation population on pendulum task - min cost!
population_returns = generation_rollout(rollout_keys, generation_params,
env_params, num_env_steps)
values = - population_returns.mean(axis=1)
# Tell the results and update the strategy + logger
memory = tell(x, values, params, memory)
evo_logger = update_logger(evo_logger, x, values, memory, top_k)
if (g+1) in [25, 50, 75, 150]:
jnp.save("gen_" + str(g+1) + ".npy", evo_logger[])
if (g + 1) % 15 == 0:
print("# Generations: {} | Fitness: {:.2f} | Cum. Time: {:.2f}".format(g+1, evo_logger["top_values"][0],
time.time()-start_t))
if check_termination(values, params, memory):
break
# Generations: 15 | Fitness: 923.41 | Cum. Time: 9.43
# Generations: 30 | Fitness: 318.41 | Cum. Time: 13.25
# Generations: 45 | Fitness: 318.41 | Cum. Time: 16.62
# Generations: 60 | Fitness: 269.11 | Cum. Time: 20.07
# Generations: 75 | Fitness: 197.12 | Cum. Time: 23.36
# Generations: 90 | Fitness: 165.14 | Cum. Time: 26.74
# Generations: 105 | Fitness: 138.38 | Cum. Time: 30.10
# Generations: 120 | Fitness: 136.69 | Cum. Time: 33.19
# Generations: 135 | Fitness: 136.69 | Cum. Time: 36.39
# Generations: 150 | Fitness: 136.69 | Cum. Time: 39.50
# Generations: 165 | Fitness: 136.69 | Cum. Time: 42.76
# Generations: 180 | Fitness: 131.25 | Cum. Time: 45.95
# Generations: 195 | Fitness: 123.69 | Cum. Time: 49.10
# Plot the results
fig, axs = plt.subplots(1, 2, figsize=(15, 4))
plot_fitness(evo_logger, title="Evolved Pendulum MLP - Performance", ylims=(90, 1600), fig=fig, ax=axs[0])
plot_sigma(evo_logger, title="Evolved Pendulum MLP - Stepsize", ylims=(0.8, 1.3), fig=fig, ax=axs[1])
In the left plot above we see that the strategy is capable of solving the Pendulum task (fitness score of ca. 120) in less than 50 seconds (on a 2,7 GHz Quad-Core Intel Core i7 chip). The plot shows the overall best performing solution (“Top 1”) found so far as well as the mean of the top 5 solutions (“Top-k Mean”) across the search iterations. In total we have gone through 200 generations with 100 networks each and evaluated each single one on 50 episodes with 200 sequential steps. A total of 200 million step transitions and 200 network sampling/covariance adaptation steps in less than a minute on a standard CPU. And this even included jit-compilation time. Pretty dope if you ask me. Let’s now take a look at the evolved behaviors at different stages of the evolutionary trajectory:
## A Hyperspace Odyssey for CMA-ES
Something I like to do when there is some extra compute lying around is to figure out when things break. For me this means running grid searches and building some intuition about the ‘white-box’ dynamics of our algorithm, which may not be directly visible just from staring at the update equations. I was particularly interested in how CMA-ES would scale to larger networks, how the truncation selection ($\lambda > \mu$) would affect the performance, as well as how much wiggle room there is with the mean learning rate ($c_m$). Therefore, I ran the same configuration as above but changed the individual parameters. In order to get a little more statistical power, I repeated the experiment over 10 different seeds. In the figure below you can see how the performance varied across these 3 parameters (smaller is better):
1. Left: We train simple feedforward policies with a single hidden layer and different hidden dimensions. The final cumulative cost after the adaptive search increases as we increase the capacity of the network. The estimation of $C$ becomes harder and the performance drops. This may get better when increasing the population size at constant absolute truncation strength.
2. Middle: The optimal truncation selection for an overall population of 100 is approximately 50%. Less or more elitism decreases the performance. On the one hand, you want to aggressively exploit the newest information of the generation. On the other hand, there is some risk associated with the limited evaluation on only 20 episodes.
3. Right: We can choose a fairly large mean learning rate without impairing the performance of CMA-ES. While the tutorial by Hansen (2016) suggests setting it smaller than 1, we don’t see significant performance drops for larger learning rates. This may be due to both our policy and our objective being deterministic.
## Comparing Single Generation Runtimes across Devices
And last but not least we have to see how hard we can scale this on GPUs. In the figure below you find a runtime (left) and compile time (right) benchmark for a single ask-eval-tell iteration on the Pendulum task and for different population sizes (and 50 MC fitness evaluations):
The measured times are averaged over 1000 iterations and were obtained on three different devices. We see an increase in time per generation as the population size is increased. Both GPUs, on the other hand, handle the increased population size easily via simple parallelization. But this appears to come with a small caveat: increased XLA compile time. I am not sure what is going on here, but given that you only have to compile once at the first function call, this seems negligible.
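The exact benchmarking script is not shown here, but the usual pattern for timing jitted JAX code looks roughly like the sketch below (the function and sizes are illustrative): call `block_until_ready()` so that asynchronous dispatch does not distort the measurements, and discard the first call, which includes XLA compilation.

```python
# Illustrative timing pattern for jitted JAX code (not the post's exact benchmark).
import time
import jax
import jax.numpy as jnp

f = jax.jit(lambda x: jnp.tanh(x) @ x.T)
x = jnp.ones((512, 512))
f(x).block_until_ready()                 # warm-up call: triggers XLA compilation
t0 = time.time()
for _ in range(1000):
    f(x).block_until_ready()             # wait for the result before stopping the clock
print("avg per call:", (time.time() - t0) / 1000)
```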
# Conclusion & Final Pointers
In this post we learned about the CMA evolutionary strategy and experienced how the jit, vmap and lax.scan combo can scale neuroevolution methods. With two lines of code we were able to vectorize over both stochastic fitness evaluations and population members. The power of XLA then allowed us to run the entire ask-evaluate-tell procedure on accelerated hardware. Ultimately, these are the types of soft- and hardware developments which enable new types of research (reminiscent of arguments in the ‘Hardware Lottery’ by Sara Hooker) and potentially revive forgotten techniques. Here we investigated one type of gradient-free optimization, which can free us from the requirement of having to use differentiable functions everywhere. E.g. we can use spikes (like the brain does!). In the future, we may dream of entirely different types of neural net architectures that leverage scalable random search/zero-order methods.
Finally, the entire code of this tutorial can be found in this repository, ready for you to dive into. Furthermore, I put the entire pipeline into a 120-line snippet. I would love to get some feedback ⭐. CMA-ES obviously is not the only ES out there, so if you want to know more about different ES or get a different point of view on CMA-ES, I recommend checking out the following links:
I want to thank Joram Keijser for helping me make this blog a better version of itself and for being my buddy in JAX-love crime.
## Extra: An Overview of Hyperparameters for CMA-ES
We have only glanced over the specifics of how the different parameters of CMA-ES are initialized and set. This is mainly because I wanted to focus on the high-level intuition and not drown you too much in math. If you want to learn more about the nitty-gritty details, I recommend checking out the tutorial by Hansen (2016). Below I have collected a table of key hyperparameters and how they are set in the reference tutorial:
| Parameter | Description | Value Range |
| --- | --- | --- |
| $\lambda$ | Population size | $\lambda \geq 2$ |
| $\mu$ | Parent number/"Elite" | $\mu \leq \lambda$ |
| $w_i$ | Recombination weights | $w_i \geq 0, i \leq \mu$ |
| $\mu_{eff}$ | Variance effective selection mass for mean | |
| $\sigma^{(0)}$ | Stepsize | $\mathbb{R}_+$ |
| $c_c$ | LRate for cumulation for rank-one update | $\leq 1$ |
| $c_1$ | LRate for rank-one update | $\leq 1 - c_\mu$ |
| $c_\mu$ | LRate for rank-$\mu$ update | $\leq 1 - c_1$ |
| $c_\sigma$ | LRate for cumulation for stepsize update | $\leq 1$ |
| $d_\sigma$ | Damping parameter for stepsize update | $\approx 1$ |
My Math Forum Probability With Marbles Without Replacement
Probability and Statistics Basic Probability and Statistics Math Forum
January 4th, 2016, 01:43 PM #1 Senior Member Joined: Oct 2013 From: New York, USA Posts: 606 Thanks: 82 Probability With Marbles Without Replacement There are 17 green marbles and 3 red marbles. When 4 marbles are selected without replacement, what is the probability of every combination of green and red marbles? P(0 green and 4 red) = 0 P(1 green and 3 red) = x P(2 green and 2 red) = y P(3 green and 1 red) = z P(4 green and 0 red) = (17/20)(16/19)(15/18)(14/17) = 28/57 = about 0.4912280702 Obviously x + y + z = 29/57.
January 4th, 2016, 05:29 PM #2 Math Team Joined: Jan 2015 From: Alabama Posts: 3,240 Thanks: 884 Given 17 green and 3 red marbles, the probability that the first marble chosen is green is 17/20. In that case there are 16 green and 3 red marbles left, so the probability the second marble is also green is 16/19. Then there are 15 green and 3 red marbles, so the probability the third marble is also green is 15/18. Then there are 14 green and 3 red marbles, so the probability the fourth marble is red is 3/17. The probability the marbles are "green, green, green, red" is (17/20)(16/19)(15/18)(3/17). The same kind of argument shows that the probability of three green and one red in any specific order is also (17/20)(16/19)(15/18)(3/17). There are 4!/(3!1!) = 4 different orders ("GGGR", "GGRG", "GRGG", and "RGGG"), so the probability of "three green, one red" in any order is 4(17/20)(16/19)(15/18)(3/17). The others can be done in the same way.
January 5th, 2016, 05:14 AM #3 Senior Member Joined: Apr 2015 From: Planet Earth Posts: 129 Thanks: 25 There is a mathematical function called a “combination.” It requires two arguments: the number of marbles you have, N; and the number you draw without replacement, M. It is: comb(N,M) = N!/M!/(N-M)! What it tells you is the number of ways you can accomplish that draw. So when you choose 4 marbles from your bag of 20, there are comb(20,4) = 20!/4!/16! = 4845 ways to do it. But you are interested in specific arrangements. To get, say, 2 red marbles (of the 3) and 2 green marbles (of the 17), there are comb(3,2)*comb(17,2) = 3*136 = 408 ways. If you do this for each possibility:
comb(3,0)*comb(17,4) = 1*2380 = 2380
comb(3,1)*comb(17,3) = 3*680 = 2040
comb(3,2)*comb(17,2) = 3*136 = 408
comb(3,3)*comb(17,1) = 1*17 = 17
As a check, 2380+2040+408+17 = 4845, the number of total possible combinations. So the probabilities are Pr(0 red, 4 green) = 2380/4845 = 28/57, Pr(1 red, 3 green) = 2040/4845 = 8/19, Pr(2 red, 2 green) = 408/4845 = 8/95, Pr(3 red, 1 green) = 17/4845 = 1/285. Thanks from EvanJ
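For readers who want to double-check these counts, a few lines of Python (illustrative, not part of the original posts) reproduce the numbers above with math.comb:

```python
# Verify the hypergeometric counts and probabilities for 17 green + 3 red marbles, drawing 4.
from math import comb
from fractions import Fraction

total = comb(20, 4)                               # 4845 ways to draw 4 of 20
for red in range(4):
    green = 4 - red
    ways = comb(3, red) * comb(17, green)         # ways to get `red` red and `green` green
    print(red, "red:", ways, Fraction(ways, total))
```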
Episode 5 of the Manly Hanley Podcast is live! I’ve attached the show notes below. Thank you for listening!
Intro
In Episode 5 of the Manly Hanley Podcast, Randy discusses why he decided that it’s Time for Nintendo. These are the reasons why Nintendo may be the best choice for the casual gamer who is reentering the gaming scene without dedicating too much time to it.
Announcements
• Randy is hard-at-work with some different vendors, looking into possible sponsorship and giveaways. Another giveaway will be coming soon, likely within the next couple of weeks!
Nintendo Switch being a worthwhile investment
• Nintendo can be good for any age, as I feel it allows for healthier video gaming habits.
• You never really have to turn it off, it just allows you to put it into sleep mode, perfect when you’re a busy person that casually wants to keep up on video gaming.
• It allows you to be a “Mobile Gamer” with its portability.
• Things are consistent across Nintendo games
• Co-Op capability seems to be common among many Nintendo titles. Super Smash Bros – You can have 8 characters playing simultaneously.
• It’s less of an investment in the long run to get a Nintendo Switch Online membership. Nintendo Switch Online has a family membership option where you can pay $35 for the year and share it with 8 different Nintendo accounts. Split between 8 people, that comes out to just over $4 a year each!
# Four charges are arranged at the corners of a square ABCD, as shown in the adjoining figure. The force on the charge kept at the centre O is:
1. Zero
2. Along the diagonal AC
3. Along the diagonal BD
4. Perpendicular to side AB
Subtopic: Coulomb's Law |
In the absence of other conductors, the surface charge density
(1) Is proportional to the charge on the conductor and its surface area
(2) Inversely proportional to the charge and directly proportional to the surface area
(3) Directly proportional to the charge and inversely proportional to the surface area
(4) Inversely proportional to the charge and the surface area
Subtopic: Electric Field |
Out of gravitational, electromagnetic, Vander Waals, electrostatic and nuclear forces; which two are able to provide an attractive force between two neutrons
(1) Electrostatic and gravitational
(2) Electrostatic and nuclear
(3) Gravitational and nuclear
(4) Some other forces like Vander Waals
Subtopic: Coulomb's Law |
Three charges 4q, Q, and q lie on a straight line at positions 0, l/2, and l respectively. The resultant force on q will be zero if Q =
(1) – q
(2) –2q
(3) $-\frac{q}{2}$
(4) 4q
Subtopic: Coulomb's Law |
Two small spheres each having the charge +Q are suspended by insulating threads of length L from a hook. If this arrangement is taken in space where there is no gravitational effect, then the angle between the two suspensions and the tension in each will be:
1. $180^{\circ},\ \frac{1}{4\pi\epsilon_0}\frac{Q^2}{(2L)^2}$
2. $90^{\circ},\ \frac{1}{4\pi\epsilon_0}\frac{Q^2}{L^2}$
3. $180^{\circ},\ \frac{1}{4\pi\epsilon_0}\frac{Q^2}{2L^2}$
4. $180^{\circ},\ \frac{1}{4\pi\epsilon_0}\frac{Q^2}{L^2}$
Subtopic: Coulomb's Law |
Two charges each of 1 coulomb are at a distance 1 km apart, the force between them is
(1) 9 × 10³ Newton
(2) 9 × 10⁻³ Newton
(3) 1.1 × 10⁻⁴ Newton
(4) 10⁴ Newton
Subtopic: Coulomb's Law |
Two charges +2 C and +6 C are repelling each other with a force of 12 N. If each charge is given –2 C of charge, then the value of the force will be:
1. 4 N (Attractive)
2. 4 N (Repulsive)
3. 8 N (Repulsive)
4. Zero
Subtopic: Coulomb's Law |
The dielectric constant of pure water is 81. Its permittivity will be
(1) 6.91 × 10⁻¹⁰ MKS units
(2) 8.86 × 10⁻¹² MKS units
(3) 1.02 × 10¹³ MKS units
(4) Cannot be calculated
Subtopic: Coulomb's Law |
Force of attraction between two point charges Q and – Q separated by d meter is Fe. When these charges are given to two identical spheres of radius R = 0.3 d whose centres are d meter apart, the force of attraction between them is
(1) Greater than Fe
(2) Equal to Fe
(3) Less than Fe
(4) None of the above
Subtopic: Coulomb's Law |
One metallic sphere A is given a positive charge, whereas another identical metallic sphere B of exactly the same mass as A is given an equal amount of negative charge. Then:
(1) mass of A and mass of B are the same.
(2) mass of A is more.
(3) mass of B is less.
(4) mass of B is more.
Subtopic: Electric Charge |
# Difference between revisions of "Branch and cut"
Author: Lindsay Siegmundt, Peter Haddad, Chris Babbington, Jon Boisvert, Haris Shaikh (SysEn 6800 Fall 2020)
Steward: Wei-Han Chen, Fengqi You
## Introduction
Branch and Cut is a methodology used to optimize linear problems with integer variables. It combines two known optimization methodologies - branch and bound and cutting planes. The cutting planes tighten the LP relaxations and improve the bounds, which lets branch and bound prune more of the search tree. The ultimate goal of this technique is to minimize the number of nodes that need to be explored.
## Methodology & Algorithm
### Algorithm - Peter
Branch and Cut is a variation of the Branch and Bound algorithm. Branch and Cut incorporates Gomory cuts, which tighten the LP relaxations and thereby reduce the search space of the given problem. The standard Simplex Algorithm is used to solve each linear programming (LP) relaxation.
Below is an Algorithm to utilize the Branch and Cut algorithm with Gomery cuts and Partitioning:
#### Step 0:
```Upper Bound = ∞
Lower Bound = -∞
```
#### Step 1. Initialize:
Set the first node to ${\displaystyle LP_{0}}$ and initialize the set of active nodes as ${\displaystyle L=\{LP_{0}\}}$. Individual nodes in the set can be accessed via ${\displaystyle LP_{n}}$.
#### Step 3. Iterate through list L:
While ${\displaystyle L}$ is not empty, select a node ${\displaystyle LP^{l}}$ from ${\displaystyle L}$, remove it from the list, and then:
##### Step 3.1. Solve the LP relaxation:
Solve the LP relaxation of the selected node; denote its objective value by ${\displaystyle Z^{l}}$ and its solution by ${\displaystyle X^{l}}$.
##### Step 3.2. Feasibility:
```If the relaxation is infeasible:
Prune the node and return to Step 3.
else:
Continue with the solution X^l and its value Z^l.
```
##### Step 3.3. Cutting Planes:
```If a cutting plane is found:
Add it to the constraints of the node and return to Step 3.1.
Else:
Continue.
```
#### Step 4. Pruning and Fathoming:
```If Z^l >= Z:
Prune the node (it cannot improve the incumbent) and return to Step 3.
```
```If Z^l <= Z AND X^l is integral feasible:
Z = Z^l
Remove from L all nodes whose bound is >= Z
Return to Step 3.
```
#### Step 5. Partition
Let ${\displaystyle \{D^{lj}\}_{j=1}^{j=k}}$ be a partition of the constraint set ${\displaystyle D^{l}}$ of problem ${\displaystyle LP^{l}}$. Add the subproblems ${\displaystyle \{LP^{lj}\}_{j=1}^{j=k}}$ to ${\displaystyle L}$, where ${\displaystyle LP^{lj}}$ is ${\displaystyle LP^{l}}$ with its feasible region restricted to ${\displaystyle D^{lj}}$, and set the bound of each subproblem, for j = 1,...,k, to the value of the bound of the parent problem l. Go to Step 3.
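To make the control flow above concrete, here is a minimal, illustrative Python sketch of the enumeration loop, using scipy.optimize.linprog for the LP relaxations. It is a plain branch-and-bound skeleton: the cutting-plane step (Step 3.3) is deliberately omitted, since generating Gomory cuts requires access to the simplex tableau, and all names are illustrative rather than taken from this article.

```python
import math
from scipy.optimize import linprog

def branch_and_cut_skeleton(c, A_ub, b_ub, bounds, tol=1e-6):
    """Minimize c @ x subject to A_ub @ x <= b_ub, variable bounds, x integer."""
    Z, best_x = math.inf, None              # incumbent value (upper bound) and solution
    L = [list(bounds)]                      # active nodes, each described by its variable bounds
    while L:                                # Step 3: iterate through the active list
        node_bounds = L.pop()
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=node_bounds, method="highs")  # Step 3.1: LP relaxation
        if not res.success:                 # Step 3.2: infeasible node -> prune
            continue
        Z_l, x_l = res.fun, res.x           # node lower bound and relaxed solution
        # (Step 3.3, cutting planes, would go here; omitted in this sketch.)
        if Z_l >= Z - tol:                  # Step 4: fathomed by bound
            continue
        frac = [i for i, xi in enumerate(x_l) if abs(xi - round(xi)) > tol]
        if not frac:                        # Step 4: integral feasible -> new incumbent
            Z, best_x = Z_l, [round(xi) for xi in x_l]
            continue
        i = frac[0]                         # Step 5: partition on a fractional variable
        lo, hi = node_bounds[i]
        left, right = list(node_bounds), list(node_bounds)
        left[i] = (lo, math.floor(x_l[i]))
        right[i] = (math.ceil(x_l[i]), hi)
        L.extend([left, right])
    return Z, best_x

# Tiny example: minimize -x0 - x1 s.t. 2*x0 + 2*x1 <= 7, x0, x1 >= 0 and integer
print(branch_and_cut_skeleton([-1, -1], [[2, 2]], [7], [(0, None), (0, None)]))
# -> objective value -3.0 with some integer optimum such as [0, 3]
```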
## What's the Square's Side?
All angles in the figure below are right angles, and the colored region has an area of $21.$
What is the value of $x?$ Keep reading to find out. |
The Federal Government provides two invoice receipt platforms for the transmission of e-invoices to federal public-sector customers:
ZRE for electronic invoices to direct Federal Administration institutions (e.g. ministries and higher federal authorities).
OZG-RE for electronic invoices to entities of the indirect Federal Administration (e.g. independent entities that have taken over federal tasks).
The invoice recipients will let you know via which invoice receipt platform they can be reached.
In order to use an invoice receipt platform, you must register there (create a user account) and have the desired transmission channels activated. |
# Bivariant algebraic K-Theory
Research paper by Guillermo Cortiñas, Andreas Thom
Indexed on: 23 Apr '07Published on: 23 Apr '07Published in: Mathematics - K-Theory and Homology
#### Abstract
We show how methods from K-theory of operator algebras can be applied in a completely algebraic setting to define a bivariant, matrix-stable, homotopy-invariant, excisive K-theory of algebras over a fixed unital ground ring H, kk_*(A,B), which is universal in the sense that it maps uniquely to any other such theory. It turns out kk is related to C. Weibel's homotopy algebraic K-theory, KH. We prove that, if H is commutative and A is central as an H-bimodule, then kk_*(H,A)=KH_*(A). We show further that some calculations from operator algebra KK-theory, such as the exact sequence of Pimsner-Voiculescu, carry over to algebraic kk. |
Wednesday, February 27, 2008
Today's dirty command-line trick: progress statistics for dd
Who has already been using dd for an endless data transfer without really knowing what is happening? That is rather annoying. Sure, when the target is a normal file, you can always use
watch ls -l target
But that won't tell you the rate of transfer, and it won't work if the target is a special file (device, pipe, ...). Fortunately, dd accepts the USR1 signal to print out some small statistics... Well, combining that with the watch trick gives this:
watch killall -USR1 dd
And there you go, dd is now regularly printing out statistics. Pretty neat, isn't it?
waveblaster said...
You can pipe dd through pv (pipe viewer) for similar functionality.
I.E.
dd if=/dev/zero | pv | dd of=/tmp/demo bs=1M
Vincent Fourmond said...
That is very good, if you remembered about it before starting the lengthy dd ;-)... |
• #### 1919 - Statistical and Computational Aspects of Learning with Complex Structure
[OWR-2019-22] (2019) - (05 May - 11 May 2019)
The recent explosion of data that is routinely collected has led scientists to contemplate more and more sophisticated structural assumptions. Understanding how to harness and exploit such structure is key to improving the ...
• #### 1339 - Statistical Inference for Complex Time Series Data
[OWR-2013-48] (2013) - (22 Sep - 28 Sep 2013)
During recent years the focus of scientific interest has turned from low dimensional stationary time series to nonstationary time series and high dimensional time series. In addition new methodological challenges are coming ...
• #### 1811 - Statistical Inference for Structured High-dimensional Models
[OWR-2018-12] (2018) - (11 Mar - 17 Mar 2018)
High-dimensional statistical inference is a newly emerged direction of statistical science in the 21 century. Its importance is due to the increasing dimensionality and complexity of models needed to process and understand ...
• #### 1004 - Statistical Issues in Prediction: what can be learned for individualized predictive medicine?
[OWR-2010-6] (2010) - (24 Jan - 30 Jan 2010)
Error is unavoidable in prediction. And it is quite common, often sizable, and usually consequential. In a clinical context, especially when dealing with a terminal illness, error in prediction of residual life means that ...
• #### 1925b - Statistical Methodology and Theory for Functional and Topological Data
[OWR-2019-28] (2019) - (16 Jun - 22 Jun 2019)
The workshop focuses on the statistical analysis of complex data which cannot be represented as realizations of finite-dimensional random vectors. An example of such data are functional data. They arise in a variety of ...
• #### 1712 - Statistical Recovery of Discrete, Geometric and Invariant Structures
[OWR-2017-16] (2017) - (19 Mar - 25 Mar 2017)
The main objective of the workshop was to bring together researchers in mathematical statistics and related areas in order to discuss recent advances and problems associated with statistical recovery of geometric and ...
• #### Statistics and dynamical phenomena
[SNAP-2014-006-EN] (Mathematisches Forschungsinstitut Oberwolfach, 2014)
A friend of mine, an expert in statistical genomics, told me the following story: At a dinner party, an attractive lady asked him, "What do you do for a living?" He replied, "I model." As my friend is a handsome man, the ...
• #### 1804 - Statistics for Data with Geometric Structure
[OWR-2018-3] (2018) - (21 Jan - 27 Jan 2018)
Statistics for data with geometric structure is an active and diverse topic of research. Applications include manifold spaces in directional data or symmetric positive definite matrices and some shape representations. But ...
• #### 1627a - Statistics for Shape and Geometric Features
[OWR-2016-32] (2016) - (03 Jul - 09 Jul 2016)
The constant emergence of novel technologies result in novel data generating devices and mechanisms that lead to a prevalence of highly complex data. To analyze such data, novel statistical methodologies need to be developed. ...
• #### 0403 - Statistics in Finance
[OWR-2004-2] (2004) - (11 Jan - 17 Jan 2004)
• #### 0542 - Statistische und Probabilistische Methoden der Modellwahl
[OWR-2005-47] (2005) - (16 Oct - 22 Oct 2005)
Aim of this conference with more than 50 participants, was to bring together leading researchers from roughly three different scientific communities who work on the same issue, data based model selection. Their different ...
• #### Stein's method for dependent random variables occuring in statistical mechanics
[OWP-2009-09] (Mathematisches Forschungsinstitut Oberwolfach, 2009-03-03)
We obtain rates of convergence in limit theorems of partial sums $S_n$ for certain sequences of dependent, identically distributed random variables, which arise naturally in statistical mechanics, in particular, in the ...
• #### Steinberg groups for Jordan pairs
[OWP-2011-29] (Mathematisches Forschungsinstitut Oberwolfach, 2011-05-26)
We introduce categories of groups with commutator relations with respect to root groups and Weyl elements, permuting the root groups. This allows us to view the classical Steinberg groups, for example the Steinberg group ...
• #### 0823 - Stochastic Analysis
[OWR-2008-25] (2008) - (01 Jun - 07 Jun 2008)
• #### 9944 - Stochastic Analysis
[TB-1999-43] (1999) - (31 Oct - 06 Nov 1999)
• #### 0244 - Stochastic Analysis
[TB-2002-49] (2002) - (27 Oct - 02 Nov 2002)
• #### 1122 - Stochastic Analysis
[OWR-2011-29] (2011) - (29 May - 04 Jun 2011)
The meeting took place on May 30-June 3, 2011, with over 55 people in attendance. Each day had 6 to 7 talks of varying length (some talks were 30 minutes long), except for Thursday: the traditional hike was moved to Thursday ...
• #### 0519 - Stochastic Analysis and Non-Classical Random Processes
[OWR-2005-23] (2005) - (08 May - 14 May 2005)
The workshop focused on recent developments in the theory of stochastic processes and flows, with special emphasis on emerging new classes of processes, as well as new objects whose limits are expected to coincide with ...
• #### 1104 - Stochastic Analysis in Finance and Insurance
[OWR-2011-6] (2011) - (23 Jan - 29 Jan 2011)
This workshop brought together leading experts and a large number of younger researchers in stochastic analysis and mathematical finance from all over the world. During a highly intense week, participants exchanged during ... |
Why is the pseudo-R2 for tobit negative or greater than one?
Title: Pseudo-R2 for tobit
Author: William Sribney, StataCorp
Date: June 1997
Concerning the pseudo-R2, we use the formula
pseudo-R2 = 1 − L1/L0
where L0 and L1 are the constant-only and full model log-likelihoods, respectively.
For discrete distributions, the log likelihood is the log of a probability, so it is always negative (or zero). Thus 0 ≥ L1 ≥ L0, so 0 ≤ L1/L0 ≤ 1, and so 0 ≤ pseudo-R2 ≤ 1 for DISCRETE distributions.
For continuous distributions, the log likelihood is the log of a density. Since density functions can be greater than 1 (cf. the normal density at 0), the log likelihood can be positive or negative. Similarly, mixed continuous/discrete likelihoods like tobit can also have a positive log likelihood.
If L1 > 0 and L0 < 0, then L1/L0 < 0, and 1 − L1/L0 > 1.
If L1 > L0 > 0, then L1/L0 > 1, and 1 − L1/L0 < 0.
Hence, this formula for pseudo-R2 can give answers > 1 or < 0 for continuous or mixed continuous/discrete likelihoods like tobit. So, it makes no sense.
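As a toy numeric illustration (not a tobit model; the numbers are made up), a tightly fitting continuous model can have a positive log likelihood, which pushes the formula outside [0, 1]:

```python
# Continuous-likelihood toy example: densities > 1 make 1 - L1/L0 leave [0, 1].
import numpy as np
from scipy.stats import norm

y = np.array([0.01, -0.02, 0.015, 0.0, -0.01])
L0 = norm.logpdf(y, loc=0.0, scale=1.0).sum()        # "constant-only" fit: L0 < 0
L1 = norm.logpdf(y, loc=y.mean(), scale=0.02).sum()  # tight fit: density > 1, so L1 > 0
print(L0, L1, 1 - L1 / L0)                           # the pseudo-R2 formula comes out greater than 1
```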
For many models, including tobit, the pseudo-R2 has no real meaning.
This formula for pseudo-R2 is nothing more than a reworking of the model chi-squared, which is 2(L1 − L0). Thus even for discrete distributions where 0 ≤ pseudo-R2 ≤ 1, it is still better to report the model chi-squared and its p-value—not the pseudo-R2.
Jonas Ulrich
@julrich
("has the missing pattern inside" in this context meaning: It has relPath: 01-pages/03-amet/01-amet.hbs in there. The pseudo patterns would follow, if processIterative for that pattern would be called, which it is not :/)
Brian Muenzenmeyer
@bmuenzenmeyer
i have been slowly incorporating this into fix/watched-pattern-changes
@mbulfair awesome!
Jonas Ulrich
@julrich
@mbulfair Definitely interested, we're still wrapping Pattern Lab with our own build process right now
(including Gulp and Webpack)
Matthew Bulfair
@mbulfair
I’m just testing out the SCSS and CSS updates for my internal version, I may release those addon’s as a file, I keep getting so many requests for it.
But there’s better alternatives now
Jonas Ulrich
@julrich
@bmuenzenmeyer A quick question I thought about yesterday: What would be the minimal way to realize a workshop / storefront example with the current alpha, where the only difference is in the states that are rendered (excludedPatternStates). Will I still have to create my own uikit-fork of workshop or your bare example? Or is there a quicker way?
Brian Muenzenmeyer
@bmuenzenmeyer
the quickest way would be to fork uikit-workshop as-is
since its completely functional
change the name and config and you should be off to the races
Jonas Ulrich
@julrich
Ah yea, btw. our own plugin broke with the upgrade to the current alpha. Something about fs path resolution for a folder called "plugin-", which doesn't work. Had no time to look at that, yet, either :/ Maybe some convetion(s) changed? Saw the comments about postinstall, but we've found a workaround for that
Cool, sounds great! Will toy with that, too :)
Thanks a lot for your (continuing) work, btw!
Brian Muenzenmeyer
@bmuenzenmeyer
can you add a note about the plugins breaking to pattern-lab/patternlab-node#812 ?
Jonas Ulrich
@julrich
Yep
Brian Muenzenmeyer
@bmuenzenmeyer
i tabled some changes to plugins, but it sounds like they broke anyway. i think i know where
and thanks
Josh Schneider
@Josh68
and thanks for the additional info the other day regarding the assetWatcher
I missed this. Guess maybe that comment was directed at me. Now that Matt's close to a Webpack v4 iteration, I will look again at the PL3 integration. I'll start by pulling in the branch with the modified server.js. But I'd like to take some time to look at Matt's changes, too.
Jonas Ulrich
@julrich
Am I right in assuming that uikit-workshop is an evolution of styleguidekit-assets-default? We already have one of those customized for our needs. Anything to watch out for when just using that? Are there known differences?
Looks mostly the same, but there are additional / new files like clipboard.min.js in uikit-workshop.
Brian Muenzenmeyer
@bmuenzenmeyer
it was at one time a port of styleguidekit-assets-default, yes
Jonas Ulrich
@julrich
Okay, will diff them, then :)
Matthew Bulfair
@mbulfair
@bmuenzenmeyer @Josh68 I am going to be pushing in a branch on the webpack edition the webpack 4 upgrade. Would either you mind giving it a spin, make sure I didn't miss something? I'll let you know soon when I do the push
Josh Schneider
@Josh68
@mbulfair @bmuenzenmeyer, I will definitely be checking this out when you push it, as my time permits.
Brian Muenzenmeyer
@bmuenzenmeyer
i just opened this
pattern-lab/patternlab-node#859
Matthew Bulfair
@mbulfair
@Josh68 https://github.com/Comcast/patternlab-edition-node-webpack/tree/latest I have not spent time updated any of the documentation, so any new things I've added aren't there. Some highlights are I added a sample addon file for those wanting to do SCSS loader and extraction. It's a highly requested question. I added some new options in the pl-config to easily make changes. And you can now on demand in build/serve clear the public folder, no matter what is in the config.
Josh Schneider
@Josh68
nice
if I have anything I think is worthwhile adding to docs for thick-skulled folks like me, I'll suggest
Matthew Bulfair
@mbulfair
The only issue I am seeing is on MAC, after you change a .mustache, the build is run, but it’s getting stuck..
As if it’s not telling webpack it’s completed
Josh Schneider
@Josh68
working on fixing some issues with package.json and the lock file. think you were working with yarn and forgot some updates for npm
indeed
will figure out what min versions to change (at least copy-webpack-plugin) and to a PR. Once I did everything fresh, ran without error
Matthew Bulfair
@mbulfair
Seems now, on windows I 'm seeing the same thing, it just hangs. No errors. Need more testers
Matthew Bulfair
@mbulfair
@Josh68 @bmuenzenmeyer I’ve pushed updates to https://github.com/Comcast/patternlab-edition-node-webpack/tree/latest it works now on MAC/PC, just make sure you clear node_modules to get the webpack 4 supported versions.
Josh Schneider
@Josh68
I've verified on my local (which is actually linux, today)
Matthew Bulfair
@mbulfair
My goal is to get as many people as possible to validate this before I release it, which I would like to do wednesday.
Josh Schneider
@Josh68
issue was in package-lock.json
GuillaumeASENT
@GuillaumeASENT
Hi. I want to know if there is a solution to async css files with paternlab. I used version 2.6.0. I've a file named getAssets.functions.php which send css element in header but i want to know if there is a solution to separate differents files (boostrap first, then all css used by molecules and organisms) without the website who charge a blank page during the time css was charged (I try to make script defer with call of css file in this case). Thanks in advance for the help.
Mario Hernandez
@mariohernandez
I just started using the node version of Pattern Lab within a Drupal theme and when I try to use my drupal path to load pattern lab (i.e. drupal-local-domain/themes/custom/my_theme/patternalb), I get the following error message in my console: Uncaught (in promise) Error: Loading chunk 2 failed.
(error: http://drupal-local-domainstyleguide/js/2-chunk-e309c72e0e8f5783df94.js)
at HTMLScriptElement.onScriptComplete (patternlab-viewer.js:115).
Pattern Lab loads properly on its own under localhost but need to be able to load it as part of my drupal site for server testing purposes. I'd appreciate your input or some guidance.
Flywall
@Flywall
What's new with with the integration of Twig in Pattern Lab 3 (node) ?
A Twig for Js version exist...
Ringo De Smet
@ringods
@Flywall I recently updated @pattern-lab/engine-twig with the Twing library (https://nightlycommit.github.io/twing/). You can try it since version 5.8.0 of that package. There are still some issues with it, which I hope to fix soon.
Ringo De Smet
@ringods
Unbound Web Design
@JaiDoubleU
Hi All,
I've got a patternlab node implementation that uses a custom bootstrap theme that includes light and dark versions. Is there any way of leveraging the light/dark theme switcher in patternlab to toggle between a light and a dark stylesheets I've proven I can toggle between light and dark stylesheets using a value in my data.json, but I'd like to be able use the currently selected patternlab theme. Any thoughts or ideas on how I might do this would be greatly appreciated.
shafqat-ali-arekibo
@shafqat-ali-arekibo
Hi All
I need few minutes from your precious time to help me, I am getting the following warning
A pattern file: brand\global\organisms\text\headings.hbs was found greater than 3 levels deep from ./source/_patterns/.
It's strongly suggested to not deviate from the following structure under _patterns/
[patternGroup]/[patternSubgroup]/[patternName].[patternExtension]
or
[patternGroup]/[patternSubgroup]/[patternName]/[patternName].[patternExtension]
Is there anyway to go above 3 level deep from ./source/_patterns/?
i.e [patternGroup]/[patternSubgroup]/[patternSubgroup]/[patternSubgroup]/[patternName].[patternExtension]
Illustrative Mathematics
Content Standards: High School
### Illustrated Standards
• #### Extend the properties of exponents to rational exponents. N-RN: Extend the properties of exponents to rational exponents.
1. Explain how the definition of the meaning of rational exponents follows from extending the properties of integer exponents to those values, allowing for a notation for radicals in terms of rational exponents. For example, we define $5^{1/3}$ to be the cube root of $5$ because we want $(5^{1/3})^3 = 5^{(1/3)3}$ to hold, so $(5^{1/3})^3$ must equal $5$.
2. Rewrite expressions involving radicals and rational exponents using the properties of exponents.
• #### Use properties of rational and irrational numbers. N-RN: Use properties of rational and irrational numbers.
1. Explain why the sum or product of two rational numbers is rational; that the sum of a rational number and an irrational number is irrational; and that the product of a nonzero rational number and an irrational number is irrational.
• #### Reason quantitatively and use units to solve problems. N-Q: Reason quantitatively and use units to solve problems.
1. Use units as a way to understand problems and to guide the solution of multi-step problems; choose and interpret units consistently in formulas; choose and interpret the scale and the origin in graphs and data displays. ${}^{\huge\star}$
2. Define appropriate quantities for the purpose of descriptive modeling. ${}^{\huge\star}$
3. Choose a level of accuracy appropriate to limitations on measurement when reporting quantities. ${}^{\huge\star}$
• #### Perform arithmetic operations with complex numbers. N-CN: Perform arithmetic operations with complex numbers.
1. Know there is a complex number $i$ such that $i^2 = -1$, and every complex number has the form $a + bi$ with $a$ and $b$ real.
2. Use the relation $i^2 = -1$ and the commutative, associative, and distributive properties to add, subtract, and multiply complex numbers.
3. $(+)$ Find the conjugate of a complex number; use conjugates to find moduli and quotients of complex numbers.
• #### Represent complex numbers and their operations on the complex plane. N-CN: Represent complex numbers and their operations on the complex plane.
1. $(+)$ Represent complex numbers on the complex plane in rectangular and polar form (including real and imaginary numbers), and explain why the rectangular and polar forms of a given complex number represent the same number.
2. $(+)$ Represent addition, subtraction, multiplication, and conjugation of complex numbers geometrically on the complex plane; use properties of this representation for computation. For example, $(-1 + \sqrt{3} i)^3 = 8$ because $(-1 + \sqrt3 i)$ has modulus $2$ and argument $120^\circ$.
3. $(+)$ Calculate the distance between numbers in the complex plane as the modulus of the difference, and the midpoint of a segment as the average of the numbers at its endpoints.
• #### Use complex numbers in polynomial identities and equations. N-CN: Use complex numbers in polynomial identities and equations.
1. Solve quadratic equations with real coefficients that have complex solutions.
2. $(+)$ Extend polynomial identities to the complex numbers. For example, rewrite $x^2 + 4$ as $(x + 2i)(x - 2i)$.
3. $(+)$ Know the Fundamental Theorem of Algebra; show that it is true for quadratic polynomials.
• #### Represent and model with vector quantities. N-VM: Represent and model with vector quantities.
1. $(+)$ Recognize vector quantities as having both magnitude and direction. Represent vector quantities by directed line segments, and use appropriate symbols for vectors and their magnitudes (e.g., $\textbf{v}$, $|\textbf{v}|$, $||\textbf{v}||$, $v$).
2. $(+)$ Find the components of a vector by subtracting the coordinates of an initial point from the coordinates of a terminal point.
3. $(+)$ Solve problems involving velocity and other quantities that can be represented by vectors.
• #### Perform operations on vectors. N-VM: Perform operations on vectors.
1. $(+)$ Add and subtract vectors.
1. Add vectors end-to-end, component-wise, and by the parallelogram rule. Understand that the magnitude of a sum of two vectors is typically not the sum of the magnitudes.
2. Given two vectors in magnitude and direction form, determine the magnitude and direction of their sum.
3. Understand vector subtraction $\textbf{v} - \textbf{w}$ as $\textbf{v} + (-\textbf{w})$, where $-\textbf{w}$ is the additive inverse of $\textbf{w}$, with the same magnitude as $\textbf{w}$ and pointing in the opposite direction. Represent vector subtraction graphically by connecting the tips in the appropriate order, and perform vector subtraction component-wise.
2. $(+)$ Multiply a vector by a scalar.
1. Represent scalar multiplication graphically by scaling vectors and possibly reversing their direction; perform scalar multiplication component-wise, e.g., as $c(v_x, v_y) = (cv_x, cv_y)$.
2. Compute the magnitude of a scalar multiple $c\textbf{v}$ using $||c\textbf{v}|| = |c|v$. Compute the direction of $c\textbf{v}$ knowing that when $|c|{v} \neq 0$, the direction of $c\textbf{v}$ is either along $\textbf{v}$ (for $c > 0$) or against $\textbf{v}$ (for $c < 0$).
• #### Perform operations on matrices and use matrices in applications. N-VM: Perform operations on matrices and use matrices in applications.
1. $(+)$ Use matrices to represent and manipulate data, e.g., to represent payoffs or incidence relationships in a network.
2. $(+)$ Multiply matrices by scalars to produce new matrices, e.g., as when all of the payoffs in a game are doubled.
3. $(+)$ Add, subtract, and multiply matrices of appropriate dimensions.
4. $(+)$ Understand that, unlike multiplication of numbers, matrix multiplication for square matrices is not a commutative operation, but still satisfies the associative and distributive properties.
5. $(+)$ Understand that the zero and identity matrices play a role in matrix addition and multiplication similar to the role of 0 and 1 in the real numbers. The determinant of a square matrix is nonzero if and only if the matrix has a multiplicative inverse.
6. $(+)$ Multiply a vector (regarded as a matrix with one column) by a matrix of suitable dimensions to produce another vector. Work with matrices as transformations of vectors.
7. $(+)$ Work with $2 \times 2$ matrices as transformations of the plane, and interpret the absolute value of the determinant in terms of area.
• #### Interpret the structure of expressions. A-SSE: Interpret the structure of expressions.
1. Interpret expressions that represent a quantity in terms of its context. ${}^{\huge\star}$
1. Interpret parts of an expression, such as terms, factors, and coefficients.
2. Interpret complicated expressions by viewing one or more of their parts as a single entity. For example, interpret $P(1+r)^n$ as the product of $P$ and a factor not depending on $P$.
2. Use the structure of an expression to identify ways to rewrite it. For example, see $x^4 - y^4$ as $(x^2)^2 - (y^2)^2$, thus recognizing it as a difference of squares that can be factored as $(x^2 - y^2)(x^2 + y^2)$.
• #### Write expressions in equivalent forms to solve problems. A-SSE: Write expressions in equivalent forms to solve problems.
1. Choose and produce an equivalent form of an expression to reveal and explain properties of the quantity represented by the expression. ${}^{\huge\star}$
1. Factor a quadratic expression to reveal the zeros of the function it defines.
2. Complete the square in a quadratic expression to reveal the maximum or minimum value of the function it defines.
3. Use the properties of exponents to transform expressions for exponential functions. For example the expression $1.15^t$ can be rewritten as $(1.15^{1/12})^{12t} \approx 1.012^{12t}$ to reveal the approximate equivalent monthly interest rate if the annual rate is $15\%$.
2. Derive the formula for the sum of a finite geometric series (when the common ratio is not 1), and use the formula to solve problems. For example, calculate mortgage payments. ${}^{\huge\star}$
• #### Perform arithmetic operations on polynomials. A-APR: Perform arithmetic operations on polynomials.
1. Understand that polynomials form a system analogous to the integers, namely, they are closed under the operations of addition, subtraction, and multiplication; add, subtract, and multiply polynomials.
• #### Understand the relationship between zeros and factors of polynomials. A-APR: Understand the relationship between zeros and factors of polynomials.
1. Know and apply the Remainder Theorem: For a polynomial $p(x)$ and a number $a$, the remainder on division by $x - a$ is $p(a)$, so $p(a) = 0$ if and only if $(x - a)$ is a factor of $p(x)$.
2. Identify zeros of polynomials when suitable factorizations are available, and use the zeros to construct a rough graph of the function defined by the polynomial.
• #### Use polynomial identities to solve problems. A-APR: Use polynomial identities to solve problems.
1. Prove polynomial identities and use them to describe numerical relationships. For example, the polynomial identity $(x^2 + y^2)^2 = (x^2 - y^2)^2 + (2xy)^2$ can be used to generate Pythagorean triples.
2. $(+)$ Know and apply the Binomial Theorem for the expansion of $(x + y)^n$ in powers of $x$ and $y$ for a positive integer $n$, where $x$ and $y$ are any numbers, with coefficients determined for example by Pascal's Triangle. (The Binomial Theorem can be proved by mathematical induction or by a combinatorial argument.)
• #### Rewrite rational expressions. A-APR: Rewrite rational expressions.
1. Rewrite simple rational expressions in different forms; write $\frac{a(x)}{b(x)}$ in the form $q(x) + \frac{r(x)}{b(x)}$, where $a(x)$, $b(x)$, $q(x)$, and $r(x)$ are polynomials with the degree of $r(x)$ less than the degree of $b(x)$, using inspection, long division, or, for the more complicated examples, a computer algebra system.
2. $(+)$ Understand that rational expressions form a system analogous to the rational numbers, closed under addition, subtraction, multiplication, and division by a nonzero rational expression; add, subtract, multiply, and divide rational expressions.
• #### Create equations that describe numbers or relationships. A-CED: Create equations that describe numbers or relationships.
1. Create equations and inequalities in one variable and use them to solve problems. Include equations arising from linear and quadratic functions, and simple rational and exponential functions. ${}^{\huge\star}$
2. Create equations in two or more variables to represent relationships between quantities; graph equations on coordinate axes with labels and scales. ${}^{\huge\star}$
3. Represent constraints by equations or inequalities, and by systems of equations and/or inequalities, and interpret solutions as viable or nonviable options in a modeling context. For example, represent inequalities describing nutritional and cost constraints on combinations of different foods. ${}^{\huge\star}$
4. Rearrange formulas to highlight a quantity of interest, using the same reasoning as in solving equations. For example, rearrange Ohm's law $V = IR$ to highlight resistance $R$. ${}^{\huge\star}$
• #### Understand solving equations as a process of reasoning and explain the reasoning. A-REI: Understand solving equations as a process of reasoning and explain the reasoning.
1. Explain each step in solving a simple equation as following from the equality of numbers asserted at the previous step, starting from the assumption that the original equation has a solution. Construct a viable argument to justify a solution method.
2. Solve simple rational and radical equations in one variable, and give examples showing how extraneous solutions may arise.
• #### Solve equations and inequalities in one variable. A-REI: Solve equations and inequalities in one variable.
1. Solve linear equations and inequalities in one variable, including equations with coefficients represented by letters.
2. Solve quadratic equations in one variable.
1. Use the method of completing the square to transform any quadratic equation in $x$ into an equation of the form $(x - p)^2 = q$ that has the same solutions. Derive the quadratic formula from this form.
2. Solve quadratic equations by inspection (e.g., for $x^2 = 49$), taking square roots, completing the square, the quadratic formula and factoring, as appropriate to the initial form of the equation. Recognize when the quadratic formula gives complex solutions and write them as $a \pm bi$ for real numbers $a$ and $b$.
• #### Solve systems of equations. A-REI: Solve systems of equations.
1. Prove that, given a system of two equations in two variables, replacing one equation by the sum of that equation and a multiple of the other produces a system with the same solutions.
2. Solve systems of linear equations exactly and approximately (e.g., with graphs), focusing on pairs of linear equations in two variables.
3. Solve a simple system consisting of a linear equation and a quadratic equation in two variables algebraically and graphically. For example, find the points of intersection between the line $y = -3x$ and the circle $x^2 + y^2 = 3$.
4. $(+)$ Represent a system of linear equations as a single matrix equation in a vector variable.
5. $(+)$ Find the inverse of a matrix if it exists and use it to solve systems of linear equations (using technology for matrices of dimension $3 \times 3$ or greater).
• #### Represent and solve equations and inequalities graphically. A-REI: Represent and solve equations and inequalities graphically.
1. Understand that the graph of an equation in two variables is the set of all its solutions plotted in the coordinate plane, often forming a curve (which could be a line).
2. Explain why the $x$-coordinates of the points where the graphs of the equations $y = f(x)$ and $y = g(x)$ intersect are the solutions of the equation $f(x) = g(x)$; find the solutions approximately, e.g., using technology to graph the functions, make tables of values, or find successive approximations. Include cases where $f(x)$ and/or $g(x)$ are linear, polynomial, rational, absolute value, exponential, and logarithmic functions. ${}^{\huge\star}$
3. Graph the solutions to a linear inequality in two variables as a half-plane (excluding the boundary in the case of a strict inequality), and graph the solution set to a system of linear inequalities in two variables as the intersection of the corresponding half-planes.
• #### Understand the concept of a function and use function notation. F-IF: Understand the concept of a function and use function notation.
1. Understand that a function from one set (called the domain) to another set (called the range) assigns to each element of the domain exactly one element of the range. If $f$ is a function and $x$ is an element of its domain, then $f(x)$ denotes the output of $f$ corresponding to the input $x$. The graph of $f$ is the graph of the equation $y = f(x)$.
2. Use function notation, evaluate functions for inputs in their domains, and interpret statements that use function notation in terms of a context.
3. Recognize that sequences are functions, sometimes defined recursively, whose domain is a subset of the integers. For example, the Fibonacci sequence is defined recursively by $f(0) = f(1) = 1$, $f(n+1) = f(n) + f(n-1)$ for $n \ge 1$.
• #### Interpret functions that arise in applications in terms of the context. F-IF: Interpret functions that arise in applications in terms of the context.
1. For a function that models a relationship between two quantities, interpret key features of graphs and tables in terms of the quantities, and sketch graphs showing key features given a verbal description of the relationship. Key features include: intercepts; intervals where the function is increasing, decreasing, positive, or negative; relative maximums and minimums; symmetries; end behavior; and periodicity. ${}^{\huge\star}$
2. Relate the domain of a function to its graph and, where applicable, to the quantitative relationship it describes. For example, if the function $h(n)$ gives the number of person-hours it takes to assemble $n$ engines in a factory, then the positive integers would be an appropriate domain for the function. ${}^{\huge\star}$
3. Calculate and interpret the average rate of change of a function (presented symbolically or as a table) over a specified interval. Estimate the rate of change from a graph. ${}^{\huge\star}$
• #### Analyze functions using different representations. F-IF: Analyze functions using different representations.
1. Graph functions expressed symbolically and show key features of the graph, by hand in simple cases and using technology for more complicated cases. ${}^{\huge\star}$
1. Graph linear and quadratic functions and show intercepts, maxima, and minima.
2. Graph square root, cube root, and piecewise-defined functions, including step functions and absolute value functions.
3. Graph polynomial functions, identifying zeros when suitable factorizations are available, and showing end behavior.
4. Graph rational functions, identifying zeros and asymptotes when suitable factorizations are available, and showing end behavior.
5. Graph exponential and logarithmic functions, showing intercepts and end behavior, and trigonometric functions, showing period, midline, and amplitude.
2. Write a function defined by an expression in different but equivalent forms to reveal and explain different properties of the function.
1. Use the process of factoring and completing the square in a quadratic function to show zeros, extreme values, and symmetry of the graph, and interpret these in terms of a context.
2. Use the properties of exponents to interpret expressions for exponential functions. For example, identify percent rate of change in functions such as $y = (1.02)^t$, $y = (0.97)^t$, $y = (1.01)^{12t}$, $y = (1.2)^{t/10}$, and classify them as representing exponential growth or decay.
3. Compare properties of two functions each represented in a different way (algebraically, graphically, numerically in tables, or by verbal descriptions). For example, given a graph of one quadratic function and an algebraic expression for another, say which has the larger maximum.
• #### Build a function that models a relationship between two quantities. F-BF: Build a function that models a relationship between two quantities.
1. Write a function that describes a relationship between two quantities. ${}^{\huge\star}$
1. Determine an explicit expression, a recursive process, or steps for calculation from a context.
2. Combine standard function types using arithmetic operations. For example, build a function that models the temperature of a cooling body by adding a constant function to a decaying exponential, and relate these functions to the model.
3. Compose functions. For example, if $T(y)$ is the temperature in the atmosphere as a function of height, and $h(t)$ is the height of a weather balloon as a function of time, then $T(h(t))$ is the temperature at the location of the weather balloon as a function of time.
2. Write arithmetic and geometric sequences both recursively and with an explicit formula, use them to model situations, and translate between the two forms. ${}^{\huge\star}$
• #### Build new functions from existing functions. F-BF: Build new functions from existing functions.
1. Identify the effect on the graph of replacing $f(x)$ by $f(x) + k$, $k f(x)$, $f(kx)$, and $f(x + k)$ for specific values of $k$ (both positive and negative); find the value of $k$ given the graphs. Experiment with cases and illustrate an explanation of the effects on the graph using technology. Include recognizing even and odd functions from their graphs and algebraic expressions for them.
2. Find inverse functions.
1. Solve an equation of the form $f(x) = c$ for a simple function $f$ that has an inverse and write an expression for the inverse. For example, $f(x) =2 x^3$ or $f(x) = (x+1)/(x-1)$ for $x \neq 1$.
2. Verify by composition that one function is the inverse of another.
3. Read values of an inverse function from a graph or a table, given that the function has an inverse.
4. Produce an invertible function from a non-invertible function by restricting the domain.
3. $(+)$ Understand the inverse relationship between exponents and logarithms and use this relationship to solve problems involving logarithms and exponents.
• #### Construct and compare linear, quadratic, and exponential models and solve problems. F-LE: Construct and compare linear, quadratic, and exponential models and solve problems.
1. Distinguish between situations that can be modeled with linear functions and with exponential functions. ${}^{\huge\star}$
1. Prove that linear functions grow by equal differences over equal intervals, and that exponential functions grow by equal factors over equal intervals.
2. Recognize situations in which one quantity changes at a constant rate per unit interval relative to another.
3. Recognize situations in which a quantity grows or decays by a constant percent rate per unit interval relative to another.
2. Construct linear and exponential functions, including arithmetic and geometric sequences, given a graph, a description of a relationship, or two input-output pairs (include reading these from a table). ${}^{\huge\star}$
3. Observe using graphs and tables that a quantity increasing exponentially eventually exceeds a quantity increasing linearly, quadratically, or (more generally) as a polynomial function. ${}^{\huge\star}$
4. For exponential models, express as a logarithm the solution to $ab^{ct} = d$ where $a$, $c$, and $d$ are numbers and the base $b$ is 2, 10, or $e$; evaluate the logarithm using technology. ${}^{\huge\star}$
• #### Interpret expressions for functions in terms of the situation they model. F-LE: Interpret expressions for functions in terms of the situation they model.
1. Interpret the parameters in a linear or exponential function in terms of a context. ${}^{\huge\star}$
• #### Extend the domain of trigonometric functions using the unit circle. F-TF: Extend the domain of trigonometric functions using the unit circle.
1. Understand radian measure of an angle as the length of the arc on the unit circle subtended by the angle.
2. Explain how the unit circle in the coordinate plane enables the extension of trigonometric functions to all real numbers, interpreted as radian measures of angles traversed counterclockwise around the unit circle.
3. $(+)$ Use special triangles to determine geometrically the values of sine, cosine, tangent for $\pi/3$, $\pi/4$ and $\pi/6$, and use the unit circle to express the values of sine, cosine, and tangent for $\pi - x$, $\pi + x$, and $2\pi - x$ in terms of their values for $x$, where $x$ is any real number.
4. $(+)$ Use the unit circle to explain symmetry (odd and even) and periodicity of trigonometric functions.
• #### Model periodic phenomena with trigonometric functions. F-TF: Model periodic phenomena with trigonometric functions.
1. Choose trigonometric functions to model periodic phenomena with specified amplitude, frequency, and midline. ${}^{\huge\star}$
2. $(+)$ Understand that restricting a trigonometric function to a domain on which it is always increasing or always decreasing allows its inverse to be constructed.
3. $(+)$ Use inverse functions to solve trigonometric equations that arise in modeling contexts; evaluate the solutions using technology, and interpret them in terms of the context. ${}^{\huge\star}$
• #### Prove and apply trigonometric identities. F-TF: Prove and apply trigonometric identities.
1. Prove the Pythagorean identity $\sin^2(\theta) + \cos^2(\theta) = 1$ and use it to find $\sin(\theta)$, $\cos(\theta)$, or $\tan(\theta)$ given $\sin(\theta)$, $\cos(\theta)$, or $\tan(\theta)$ and the quadrant of the angle.
2. $(+)$ Prove the addition and subtraction formulas for sine, cosine, and tangent and use them to solve problems.
• #### Experiment with transformations in the plane G-CO: Experiment with transformations in the plane
1. Know precise definitions of angle, circle, perpendicular line, parallel line, and line segment, based on the undefined notions of point, line, distance along a line, and distance around a circular arc.
2. Represent transformations in the plane using, e.g., transparencies and geometry software; describe transformations as functions that take points in the plane as inputs and give other points as outputs. Compare transformations that preserve distance and angle to those that do not (e.g., translation versus horizontal stretch).
3. Given a rectangle, parallelogram, trapezoid, or regular polygon, describe the rotations and reflections that carry it onto itself.
4. Develop definitions of rotations, reflections, and translations in terms of angles, circles, perpendicular lines, parallel lines, and line segments.
5. Given a geometric figure and a rotation, reflection, or translation, draw the transformed figure using, e.g., graph paper, tracing paper, or geometry software. Specify a sequence of transformations that will carry a given figure onto another.
• #### Understand congruence in terms of rigid motions G-CO: Understand congruence in terms of rigid motions
1. Use geometric descriptions of rigid motions to transform figures and to predict the effect of a given rigid motion on a given figure; given two figures, use the definition of congruence in terms of rigid motions to decide if they are congruent.
2. Use the definition of congruence in terms of rigid motions to show that two triangles are congruent if and only if corresponding pairs of sides and corresponding pairs of angles are congruent.
3. Explain how the criteria for triangle congruence (ASA, SAS, and SSS) follow from the definition of congruence in terms of rigid motions.
• #### Prove geometric theorems G-CO: Prove geometric theorems
1. Prove theorems about lines and angles. Theorems include: vertical angles are congruent; when a transversal crosses parallel lines, alternate interior angles are congruent and corresponding angles are congruent; points on a perpendicular bisector of a line segment are exactly those equidistant from the segment's endpoints.
2. Prove theorems about triangles. Theorems include: measures of interior angles of a triangle sum to $180^\circ$; base angles of isosceles triangles are congruent; the segment joining midpoints of two sides of a triangle is parallel to the third side and half the length; the medians of a triangle meet at a point.
3. Prove theorems about parallelograms. Theorems include: opposite sides are congruent, opposite angles are congruent, the diagonals of a parallelogram bisect each other, and conversely, rectangles are parallelograms with congruent diagonals.
• #### Make geometric constructions G-CO: Make geometric constructions
1. Make formal geometric constructions with a variety of tools and methods (compass and straightedge, string, reflective devices, paper folding, dynamic geometric software, etc.). Copying a segment; copying an angle; bisecting a segment; bisecting an angle; constructing perpendicular lines, including the perpendicular bisector of a line segment; and constructing a line parallel to a given line through a point not on the line.
2. Construct an equilateral triangle, a square, and a regular hexagon inscribed in a circle.
• #### Understand similarity in terms of similarity transformations G-SRT: Understand similarity in terms of similarity transformations
1. Verify experimentally the properties of dilations given by a center and a scale factor:
1. A dilation takes a line not passing through the center of the dilation to a parallel line, and leaves a line passing through the center unchanged.
2. The dilation of a line segment is longer or shorter in the ratio given by the scale factor.
2. Given two figures, use the definition of similarity in terms of similarity transformations to decide if they are similar; explain using similarity transformations the meaning of similarity for triangles as the equality of all corresponding pairs of angles and the proportionality of all corresponding pairs of sides.
3. Use the properties of similarity transformations to establish the AA criterion for two triangles to be similar.
• #### Prove theorems involving similarity G-SRT: Prove theorems involving similarity
1. Prove theorems about triangles. Theorems include: a line parallel to one side of a triangle divides the other two proportionally, and conversely; the Pythagorean Theorem proved using triangle similarity.
2. Use congruence and similarity criteria for triangles to solve problems and to prove relationships in geometric figures.
• #### Define trigonometric ratios and solve problems involving right triangles G-SRT: Define trigonometric ratios and solve problems involving right triangles
1. Understand that by similarity, side ratios in right triangles are properties of the angles in the triangle, leading to definitions of trigonometric ratios for acute angles.
2. Explain and use the relationship between the sine and cosine of complementary angles.
3. Use trigonometric ratios and the Pythagorean Theorem to solve right triangles in applied problems. ${}^{\huge\star}$
• #### Apply trigonometry to general triangles G-SRT: Apply trigonometry to general triangles
1. $(+)$ Derive the formula $A = 1/2 ab \sin(C)$ for the area of a triangle by drawing an auxiliary line from a vertex perpendicular to the opposite side.
2. $(+)$ Prove the Laws of Sines and Cosines and use them to solve problems.
3. $(+)$ Understand and apply the Law of Sines and the Law of Cosines to find unknown measurements in right and non-right triangles (e.g., surveying problems, resultant forces).
• #### Understand and apply theorems about circles G-C: Understand and apply theorems about circles
1. Prove that all circles are similar.
2. Identify and describe relationships among inscribed angles, radii, and chords. Include the relationship between central, inscribed, and circumscribed angles; inscribed angles on a diameter are right angles; the radius of a circle is perpendicular to the tangent where the radius intersects the circle.
3. Construct the inscribed and circumscribed circles of a triangle, and prove properties of angles for a quadrilateral inscribed in a circle.
4. $(+)$ Construct a tangent line from a point outside a given circle to the circle.
• #### Find arc lengths and areas of sectors of circles G-C: Find arc lengths and areas of sectors of circles
1. Derive using similarity the fact that the length of the arc intercepted by an angle is proportional to the radius, and define the radian measure of the angle as the constant of proportionality; derive the formula for the area of a sector.
• #### Translate between the geometric description and the equation for a conic section G-GPE: Translate between the geometric description and the equation for a conic section
1. Derive the equation of a circle of given center and radius using the Pythagorean Theorem; complete the square to find the center and radius of a circle given by an equation.
2. Derive the equation of a parabola given a focus and directrix.
3. $(+)$ Derive the equations of ellipses and hyperbolas given the foci, using the fact that the sum or difference of distances from the foci is constant.
• #### Use coordinates to prove simple geometric theorems algebraically G-GPE: Use coordinates to prove simple geometric theorems algebraically
1. Use coordinates to prove simple geometric theorems algebraically. For example, prove or disprove that a figure defined by four given points in the coordinate plane is a rectangle; prove or disprove that the point $(1, \sqrt{3})$ lies on the circle centered at the origin and containing the point $(0, 2)$.
2. Prove the slope criteria for parallel and perpendicular lines and use them to solve geometric problems (e.g., find the equation of a line parallel or perpendicular to a given line that passes through a given point).
3. Find the point on a directed line segment between two given points that partitions the segment in a given ratio.
4. Use coordinates to compute perimeters of polygons and areas of triangles and rectangles, e.g., using the distance formula. ${}^{\huge\star}$
• #### Explain volume formulas and use them to solve problems G-GMD: Explain volume formulas and use them to solve problems
1. Give an informal argument for the formulas for the circumference of a circle, area of a circle, volume of a cylinder, pyramid, and cone. Use dissection arguments, Cavalieri's principle, and informal limit arguments.
2. $(+)$ Give an informal argument using Cavalieri's principle for the formulas for the volume of a sphere and other solid figures.
3. Use volume formulas for cylinders, pyramids, cones, and spheres to solve problems. ${}^{\huge\star}$
• #### Visualize relationships between two-dimensional and three-dimensional objects G-GMD: Visualize relationships between two-dimensional and three-dimensional objects
1. Identify the shapes of two-dimensional cross-sections of three-dimensional objects, and identify three-dimensional objects generated by rotations of two-dimensional objects.
• #### Apply geometric concepts in modeling situations G-MG: Apply geometric concepts in modeling situations
1. Use geometric shapes, their measures, and their properties to describe objects (e.g., modeling a tree trunk or a human torso as a cylinder). ${}^{\huge\star}$
2. Apply concepts of density based on area and volume in modeling situations (e.g., persons per square mile, BTUs per cubic foot). ${}^{\huge\star}$
3. Apply geometric methods to solve design problems (e.g., designing an object or structure to satisfy physical constraints or minimize cost; working with typographic grid systems based on ratios). ${}^{\huge\star}$
• #### Summarize, represent, and interpret data on a single count or measurement variable S-ID: Summarize, represent, and interpret data on a single count or measurement variable
1. Represent data with plots on the real number line (dot plots, histograms, and box plots). ${}^{\huge\star}$
2. Use statistics appropriate to the shape of the data distribution to compare center (median, mean) and spread (interquartile range, standard deviation) of two or more different data sets. ${}^{\huge\star}$
3. Interpret differences in shape, center, and spread in the context of the data sets, accounting for possible effects of extreme data points (outliers). ${}^{\huge\star}$
4. Use the mean and standard deviation of a data set to fit it to a normal distribution and to estimate population percentages. Recognize that there are data sets for which such a procedure is not appropriate. Use calculators, spreadsheets, and tables to estimate areas under the normal curve. ${}^{\huge\star}$
• #### Summarize, represent, and interpret data on two categorical and quantitative variables S-ID: Summarize, represent, and interpret data on two categorical and quantitative variables
1. Summarize categorical data for two categories in two-way frequency tables. Interpret relative frequencies in the context of the data (including joint, marginal, and conditional relative frequencies). Recognize possible associations and trends in the data. ${}^{\huge\star}$
2. Represent data on two quantitative variables on a scatter plot, and describe how the variables are related. ${}^{\huge\star}$
1. Fit a function to the data; use functions fitted to data to solve problems in the context of the data. Use given functions or choose a function suggested by the context. Emphasize linear, quadratic, and exponential models.
2. Informally assess the fit of a function by plotting and analyzing residuals.
3. Fit a linear function for a scatter plot that suggests a linear association.
• #### Interpret linear models S-ID: Interpret linear models
1. Interpret the slope (rate of change) and the intercept (constant term) of a linear model in the context of the data. ${}^{\huge\star}$
2. Compute (using technology) and interpret the correlation coefficient of a linear fit. ${}^{\huge\star}$
3. Distinguish between correlation and causation. ${}^{\huge\star}$
• #### Understand and evaluate random processes underlying statistical experiments S-IC: Understand and evaluate random processes underlying statistical experiments
1. Understand statistics as a process for making inferences about population parameters based on a random sample from that population. ${}^{\huge\star}$
2. Decide if a specified model is consistent with results from a given data-generating process, e.g., using simulation. For example, a model says a spinning coin falls heads up with probability $0.5$. Would a result of $5$ tails in a row cause you to question the model? ${}^{\huge\star}$
• #### Make inferences and justify conclusions from sample surveys, experiments, and observational studies S-IC: Make inferences and justify conclusions from sample surveys, experiments, and observational studies
1. Recognize the purposes of and differences among sample surveys, experiments, and observational studies; explain how randomization relates to each. ${}^{\huge\star}$
2. Use data from a sample survey to estimate a population mean or proportion; develop a margin of error through the use of simulation models for random sampling. ${}^{\huge\star}$
3. Use data from a randomized experiment to compare two treatments; use simulations to decide if differences between parameters are significant. ${}^{\huge\star}$
4. Evaluate reports based on data. ${}^{\huge\star}$
• #### Understand independence and conditional probability and use them to interpret data S-CP: Understand independence and conditional probability and use them to interpret data
1. Describe events as subsets of a sample space (the set of outcomes) using characteristics (or categories) of the outcomes, or as unions, intersections, or complements of other events (“or,” “and,” “not”). ${}^{\huge\star}$
2. Understand that two events $A$ and $B$ are independent if the probability of $A$ and $B$ occurring together is the product of their probabilities, and use this characterization to determine if they are independent. ${}^{\huge\star}$
3. Understand the conditional probability of $A$ given $B$ as $P(\mbox{A and B})/P(B)$, and interpret independence of $A$ and $B$ as saying that the conditional probability of $A$ given $B$ is the same as the probability of $A$, and the conditional probability of $B$ given $A$ is the same as the probability of $B$. ${}^{\huge\star}$
4. Construct and interpret two-way frequency tables of data when two categories are associated with each object being classified. Use the two-way table as a sample space to decide if events are independent and to approximate conditional probabilities. For example, collect data from a random sample of students in your school on their favorite subject among math, science, and English. Estimate the probability that a randomly selected student from your school will favor science given that the student is in tenth grade. Do the same for other subjects and compare the results. ${}^{\huge\star}$
5. Recognize and explain the concepts of conditional probability and independence in everyday language and everyday situations. For example, compare the chance of having lung cancer if you are a smoker with the chance of being a smoker if you have lung cancer. ${}^{\huge\star}$
• #### Use the rules of probability to compute probabilities of compound events in a uniform probability model S-CP: Use the rules of probability to compute probabilities of compound events in a uniform probability model
1. Find the conditional probability of $A$ given $B$ as the fraction of $B$'s outcomes that also belong to $A$, and interpret the answer in terms of the model. ${}^{\huge\star}$
2. Apply the Addition Rule, $P(\mbox{A or B}) = P(A) + P(B) - P(\mbox{A and B})$, and interpret the answer in terms of the model. ${}^{\huge\star}$
3. $(+)$ Apply the general Multiplication Rule in a uniform probability model, $P(\mbox{A and B}) = P(A)P(B|A) = P(B)P(A|B)$, and interpret the answer in terms of the model. ${}^{\huge\star}$
4. $(+)$ Use permutations and combinations to compute probabilities of compound events and solve problems. ${}^{\huge\star}$
• #### Calculate expected values and use them to solve problems S-MD: Calculate expected values and use them to solve problems
1. $(+)$ Define a random variable for a quantity of interest by assigning a numerical value to each event in a sample space; graph the corresponding probability distribution using the same graphical displays as for data distributions. ${}^{\huge\star}$
2. $(+)$ Calculate the expected value of a random variable; interpret it as the mean of the probability distribution. ${}^{\huge\star}$
3. $(+)$ Develop a probability distribution for a random variable defined for a sample space in which theoretical probabilities can be calculated; find the expected value. For example, find the theoretical probability distribution for the number of correct answers obtained by guessing on all five questions of a multiple-choice test where each question has four choices, and find the expected grade under various grading schemes. ${}^{\huge\star}$
4. $(+)$ Develop a probability distribution for a random variable defined for a sample space in which probabilities are assigned empirically; find the expected value. For example, find a current data distribution on the number of TV sets per household in the United States, and calculate the expected number of sets per household. How many TV sets would you expect to find in 100 randomly selected households? ${}^{\huge\star}$
• #### Use probability to evaluate outcomes of decisions S-MD: Use probability to evaluate outcomes of decisions
1. $(+)$ Weigh the possible outcomes of a decision by assigning probabilities to payoff values and finding expected values. ${}^{\huge\star}$
1. Find the expected payoff for a game of chance. For example, find the expected winnings from a state lottery ticket or a game at a fast-food restaurant.
2. Evaluate and compare strategies on the basis of expected values. For example, compare a high-deductible versus a low-deductible automobile insurance policy using various, but reasonable, chances of having a minor or a major accident.
2. $(+)$ Use probabilities to make fair decisions (e.g., drawing by lots, using a random number generator). ${}^{\huge\star}$
3. $(+)$ Analyze decisions and strategies using probability concepts (e.g., product testing, medical testing, pulling a hockey goalie at the end of a game). ${}^{\huge\star}$
# NAG Library Function Document
## 1Purpose
nag_prob_vavilov (g01euc) returns the value of the Vavilov distribution function ${\Phi }_{V}\left(\lambda \text{;}\kappa ,{\beta }^{2}\right)$.
It is intended to be used after a call to nag_init_vavilov (g01zuc).
## 2Specification
#include <nag.h>
#include <nagg01.h>
double nag_prob_vavilov (double x, const double comm_arr[])
## 3Description
nag_prob_vavilov (g01euc) evaluates an approximation to the Vavilov distribution function ${\Phi }_{V}\left(\lambda \text{;}\kappa ,{\beta }^{2}\right)$ given by
$\Phi_V\left(\lambda;\kappa,\beta^2\right) = \int_{-\infty}^{\lambda} \phi_V\left(\lambda;\kappa,\beta^2\right)\,d\lambda ,$
where $\varphi \left(\lambda \right)$ is described in nag_prob_density_vavilov (g01muc). The method used is based on Fourier expansions. Further details can be found in Schorr (1974).
## 4References
Schorr B (1974) Programs for the Landau and the Vavilov distributions and the corresponding random numbers Comp. Phys. Comm. 7 215–224
## 5Arguments
1: $\mathbf{x}$ – double – Input
On entry: the argument $\lambda$ of the function.
2: $\mathbf{comm_arr}\left[322\right]$ – const double – Communication Array
On entry: this must be the same argument comm_arr as returned by a previous call to nag_init_vavilov (g01zuc).
## 6Error Indicators and Warnings
None.
## 7Accuracy
At least five significant digits are usually correct.
## 8Parallelism and Performance
nag_prob_vavilov (g01euc) is not threaded in any implementation.
## 9Further Comments
nag_prob_vavilov (g01euc) can be called repeatedly with different values of $\lambda$ provided that the values of $\kappa$ and ${\beta }^{2}$ remain unchanged between calls. Otherwise, nag_init_vavilov (g01zuc) must be called again. This is illustrated in Section 10.
## 10Example
This example evaluates ${\Phi }_{V}\left(\lambda \text{;}\kappa ,{\beta }^{2}\right)$ at $\lambda =0.1$, $\kappa =2.5$ and ${\beta }^{2}=0.7$, and prints the results.
### 10.1Program Text
Program Text (g01euce.c)
### 10.2Program Data
Program Data (g01euce.d)
### 10.3Program Results
Program Results (g01euce.r)
© The Numerical Algorithms Group Ltd, Oxford, UK. 2017 |
# Fluid food is pumped through a pipeline with a decrease in the pipe diameter. The velocity in the...
## Question:
Fluid food is pumped through a pipeline with a decrease in the pipe diameter. The velocity in the first pipe is 1.6 m/s and the velocity in the second pipe is 120% of that value. Assuming turbulent flow, what is the change in kinetic energy of the fluid (in J/kg) in going from the first pipe to the second pipe?
## Kinetic Energy:
Kinetic energy is the energy an object possesses by virtue of its motion. It is measured in joules (J).
Given data:
• The velocity in the first pipe is {eq}{v_1} = 1.6\,{\rm{m/s}} {/eq}
• The velocity in the second pipe is {eq}{v_2} = 120\% \times {v_1} = 1.2 \times 1.6 = 1.92\,{\rm{m/s}} {/eq}
Because the problem specifies turbulent flow, the kinetic-energy correction factor is approximately 1, so the change in kinetic energy per unit mass is given by
{eq}\Delta K.e = \dfrac{1}{2}\left[ {{v_2}^2 - {v_1}^2} \right] {/eq}
Substituting the values in the above equation as,
{eq}\begin{align*} \Delta K.e &= \dfrac{1}{2}\left[ {{v_2}^2 - {v_1}^2} \right]\\ \Delta K.e &= \dfrac{1}{2}\left[ {{{\left( {1.92} \right)}^2} - {{\left( {1.6} \right)}^2}} \right]\\ \Delta K.e &= 0.5632\,{\rm{J/kg}} \end{align*} {/eq}
Thus the change in the kinetic energy per unit mass is {eq}\Delta K.e = 0.5632\,{\rm{J/kg}} {/eq} |
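As a quick numerical check, the same arithmetic can be written as a short R sketch (values taken from the problem statement above):

v1  <- 1.6                    # m/s, velocity in the first pipe
v2  <- 1.2 * v1               # m/s, velocity in the second pipe (120% of v1)
dKE <- 0.5 * (v2^2 - v1^2)    # J/kg; kinetic-energy correction factor taken as ~1 for turbulent flow
dKE                           # 0.5632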
I don't quite get Newton's third law
1. Jan 25, 2010
physgirl
i don't quite get Newton's third law... :(
It sounds simple enough, F12 = -F21. However, I'm getting that mixed up with the second law now (ie. if there is a net F on the system, there will be acceleration).
For instance, say that there's a box, and the "F" vector is pointing to the right. However, there's another force (say, F2) pointing to the left. If F is greater than F2, then the box will accelerate to the right according to F-F2=ma. Correct?
But... doesn't the Third law say that for any force acting on this box, there's always another "force" (opposite in sign, equal in magnitude) ALSO acting on the box? As in... in the above scenario, there's -F acting upon the box to counter F, and -F2 acting upon the box to counter F2... This confuses me, because this is implying that there's no net force on the system right? Because by the Third law, there's some force (-F21) canceling out the force in interest (F12)?!
Where's my misunderstanding coming from?
Thanks in advance for any input :)
2. Jan 25, 2010
DylanB
Re: i don't quite get Newton's third law... :(
Here is the confusion. Newton's 3rd law describes in the simplest case, two objects in contact with each other will exert equal and opposite forces on each other. For example, if the force F on your box is caused by your hand pushing the box to the right, then there will be an equal force from the box pushing on your hand (not the box) to the left.
3. Jan 25, 2010
rcgldr
Re: i don't quite get Newton's third law... :(
Newton's third law takes into account the reaction force related to acceleration of an object. This reaction force does not cancel out the force that is causing the acceleration, it's just a reaction to the acceleration caused by a force.
4. Jan 26, 2010
D H
Staff Emeritus
Re: i don't quite get Newton's third law... :(
Piling on Dylan's response, the third law counterpart to a force always acts on some other body. Moreover, third law pairs are always the same kind of force. For example, the Earth exerts a gravitational force on the Moon, and the Moon exerts an equal but opposite gravitational force on the Earth.
A more complicated example: Think of a sled sitting on the ground. The forces acting on the sled are gravity (downward) and the normal force (upward). The net force acting on the sled is zero[1] as the sled isn't moving. This does not mean that gravity and the normal force are third law counterparts. Both forces act on the sled and the two forces are different kinds of forces. One is gravitational and the other is electrostatic repulsion. The third law counterparts of these forces are the gravitational force and electrostatic repulsion exerted by the sled on the Earth. This example can be made even more complex by adding a person pulling a sled with a rope. There are *lots* of third law pairs here.
------------------------
[1] From an inertial perspective, the sled is moving; it is sitting still on the rotating Earth. The sled undergoes uniform circular motion about the Earth's rotation axis. The gravitational and normal forces on the sled are neither equal in magnitude nor opposite in direction: the two forces do not quite cancel.
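To put a rough number on "do not quite cancel": the centripetal acceleration required for that circular motion is tiny compared with g. A small R calculation, with equatorial values assumed purely for illustration:

omega <- 2*pi/86164      # Earth's sidereal rotation rate, rad/s
R     <- 6.378e6         # equatorial radius, m
a_c   <- omega^2 * R     # centripetal acceleration required, m/s^2
a_c                      # about 0.034 m/s^2, versus g of about 9.81 m/s^2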
5. Jan 26, 2010
Asok
Re: i don't quite get Newton's third law... :(
the third law confusion.. well .. it can be explained like this... the force and the reaction which describe in the third law are acting on different objects.. one force act on the box and the other act on the the hand.. both are same and opposite in direction.. if you take the box alone only one force acts on it.. the reaction of that force acts on the hand.. so to the box apply F=ma then you can have one unbalanced 'F', hence the acceleraion by that force..
Another thing.. say you draw a box which is in stable on a table and you are marking the forces.. then probably you will mark 'mg' downwards and the reaction to the upward by the table.. but never get mixed up, those are not the force and the corresponding reaction which newton says in his third law..the reaction for the 'mg' is unmarked force at the earth's center of gravity which acts upwards.. and the 'reaction' for the 'reaction force' which acts from the table on the box, is the force which acts downwards and acts on the table from the box.. mmm. i think you better sketch this using four forces.. (you will have to draw the earth's center too) |
# Presidential Probit Model: A Working Paper
February 23, 2012 18:30
I wrote up the Presidential Probit Model work as a working paper that is available from the Social Science Research Network.
# Towards Non-Linear Models on Quantized Data: A Support Vector Machine for the Presidential Election Data
August 03, 2010 23:55
To finalize our analysis of the Presidential election data, I will quickly compare the usage of a Support Vector Machine, which is a modern classification engine based on research by Vapnik et al., with that of the probit model. The SVM is implemented in several packages for R; in the following we will use that from the kernlab package. The code fragments below illustrate how to compute the probit and SVM models on our data.
# pmodel is the probit fit chosen in the earlier posts; it is assumed here to have been built as follows.
pmodel <- glm(PRES ~ HEIGHT + CHANGE, family=binomial(link="probit"), data=presdata)
presdata$PROBIT <- predict(pmodel, type='response')
library(kernlab)
kmodel <- ksvm(PRES ~ HEIGHT + CHANGE, type='C-svc', data=presdata)
presdata$SVM <- predict(kmodel)
The table below illustrates the output of both models:
| YEAR | PROBIT | SVM | PRES |
|------|--------|-----|------|
| 1896 | 0.15 | 0 | 1 |
| 1900 | 0.60 | 1 | 1 |
| 1904 | 0.24 | 0 | 1 |
| 1908 | 0.69 | 1 | 1 |
| 1912 | 0.34 | 0 | 0 |
| 1916 | 0.45 | 0 | 0 |
| 1920 | 1.00 | 1 | 1 |
| 1924 | 0.71 | 1 | 1 |
| 1928 | 0.74 | 1 | 1 |
| 1932 | 0.15 | 0 | 0 |
| 1936 | 0.03 | 0 | 0 |
| 1940 | 0.80 | 1 | 0 |
| 1944 | 0.25 | 0 | 0 |
| 1948 | 0.80 | 1 | 0 |
| 1952 | 0.90 | 1 | 1 |
| 1956 | 0.90 | 1 | 1 |
| 1960 | 0.39 | 0 | 0 |
| 1964 | 0.15 | 0 | 0 |
| 1968 | 0.90 | 1 | 1 |
| 1972 | 0.84 | 1 | 1 |
| 1976 | 0.34 | 0 | 0 |
| 1980 | 0.83 | 1 | 1 |
| 1984 | 0.96 | 1 | 1 |
| 1988 | 0.96 | 1 | 1 |
| 1992 | 0.45 | 0 | 0 |
| 1996 | 0.24 | 0 | 0 |
| 2000 | 0.80 | 1 | 1 |
| 2004 | 0.48 | 0 | 1 |
| 2008 | 0.02 | 0 | 0 |
We see from the above table that the SVM is categorizing the data quite accurately, but it can tell us nothing about the likelihood of a particular outcome — because that is not what it has been asked to do. We also built our SVM using the parameter set indicated by our probit analysis, because that system gave us tools to investigate those choices. However, the SVM did accomplish something that we only implicitly asked of the probit analysis, which is to classify the data. Although the probit probabilities are useful out-of-sample in a Bayesian sense, in-sample they do not help us quantify the success of the method — we have to overlay an ad hoc classification scheme to permit that interpretation.
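One possible ad hoc classification overlay of the kind mentioned above (a sketch, not part of the original analysis) is to threshold the probit probabilities at 0.5 and cross-tabulate both classifiers against the outcome:

probit_class <- as.numeric(presdata$PROBIT > 0.5)     # ad hoc 0.5 threshold on the probit probabilities
table(actual = presdata$PRES, probit = probit_class)
table(actual = presdata$PRES, svm = presdata$SVM)
mean(probit_class == presdata$PRES)    # in-sample hit rate of the thresholded probit
mean(presdata$SVM == presdata$PRES)    # in-sample hit rate of the SVM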
# Towards Non-Linear Models on Quantized Data: A Probit Analysis of Presidential Elections Part II
August 03, 2010 15:55
To finish off our “toy” example of using Discrete Dependent Variables analysis, following on from the discussion of our data, we will use the probit link function and the NY Times data to test each of our variables for inclusion in a model forecasting the probability of a Republican President. The predictors are:
• height difference (Republican candidate height in feet minus Democrat candidate height in feet);
• weight difference (Republican candidate weight in pounds minus Democrat candidate weight in pounds);
• incumbency i.e. the prior value of the Presidential Party state variable; and,
• change i.e. the value of the Presidential Party state variable at two lags.
I am going to perform this analysis in RATS, but to continue from the prior post I will also illustrate the command used in R to fit such models.
summary(glm(PRES ~ HEIGHT + WEIGHT + INCUMBENCY + CHANGE,
            family=binomial(link="probit"), data=presdata))   # probit link completed; the call was truncated in the original post
The results are:
| Variable | Likelihood Ratio ($\lambda$) | $p$-value | Parameter Estimate ($\alpha$) |
|----------|------------------------------|-----------|-------------------------------|
| HEIGHT | 6.422 | 0.01127461 | 3.1360 ± 1.4517 |
| HEIGHT+WEIGHT | 0.487 | 0.48538310 | 2.6919 ± 1.6049 |
| HEIGHT+INCUMBENCY | 0.761 | 0.38315424 | 3.0773 ± 1.4722 |
| HEIGHT+CHANGE | 5.273 | 0.02165453 | 3.6310 ± 1.5546 |
This analysis was performed on all data up to and including 2004, replicating the original out-of-sample nature of the 2008 election. With over 98% confidence we reject the null hypothesis and add the HEIGHT variable to the model. We then test the remaining variables and, with over 97% confidence, add CHANGE to the model including HEIGHT.
The above chart illustrates the implied probabilities extracted from the estimated model. (The shading illustrates the actual party selection.) Out of sample, this exhibits a very strong prediction that President Barack Obama would have been elected and, given the unknown stature of his coming opponent, a marginal probability favouring his re-election. Finally, after entertaining ourselves with this “toy” model, we will return to applying this methodology to market data in a coming post.
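A minimal R sketch of the likelihood-ratio comparisons described above, assuming presdata as constructed in the Part I post (exact figures may differ slightly from the table depending on estimation details):

est  <- subset(presdata, YEAR <= 2004)    # estimation sample, holding out 2008
m0   <- glm(PRES ~ 1,               family=binomial(link="probit"), data=est)
m_h  <- glm(PRES ~ HEIGHT,          family=binomial(link="probit"), data=est)
m_hc <- glm(PRES ~ HEIGHT + CHANGE, family=binomial(link="probit"), data=est)
anova(m0,  m_h,  test="Chisq")   # likelihood-ratio test for adding HEIGHT
anova(m_h, m_hc, test="Chisq")   # likelihood-ratio test for adding CHANGE to HEIGHT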
# Towards Non-Linear Models on Quantized Data: A Probit Analysis of Presidential Elections Part I
August 02, 2010 09:00
This post is concerned with an analytical method that could be thought of as a half-way house between the standard linear models that work at lower frequencies and the non-linear models that we are hypothesizing are necessary to deal with highly quantized high frequency data. In common with our prior post, on computing bias in Presidential approval rating opinion polling, it also uses a political rather than a financial data set. However, that merely changes the meaning of the analysis and not the method of analysis. (It actually represents a piece of work I did prior to the recent Presidential election, which happened to use this method, so I thought I'd include it here as it is entertaining.)
What I'm discussing are methods to deal with what are now known as discrete dependent variables in statistics. To my knowledge this methodology was pioneered by Chester Bliss in his analysis on the Probability of Insect Toxicity or Probit Models. Faced with a binary outcome (either the insects die, or they don't), a standard linear regression was unsuitable for estimating this quantity. Instead, Bliss decided to estimate the probability of the outcome via what we now call a link function.
$\textrm{i.e.}\;y_i=\alpha+\mathbf{\beta\cdot x}_i\;\textrm{is replaced with}\;P_i=\Phi(\alpha+\mathbf{\beta\cdot x}_i)$
Here Φ is the c.d.f. for the Normal (or other suitable) distribution. We note that this formalism is identical to that of the Perceptron, in which the discriminant output of the system is a sigmoid (i.e. “S” shaped) function of a linear function of the data. However, unlike a typical Neural Net, which generally just describes a functional relationship of some kind, we have a specific probabilistic interpretation of the output of our system. Thus we can apply the entire toolkit developed for maximum likelihood analysis to this problem. Specifically, if our models do not involve degenerate specifications, we can use the maximum likelihood ratio test to evaluate the utility of various composite hypotheses.
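To make the link function concrete, here is a small illustrative sketch (the parameter values are invented purely for illustration, not estimates); in R the standard normal c.d.f. Φ is pnorm:

Phi   <- pnorm                   # probit link: standard normal c.d.f.
alpha <- 0.5; beta <- 3          # illustrative values only
x     <- seq(-1, 1, by=0.25)
P     <- Phi(alpha + beta*x)     # probabilities in (0,1), an "S"-shaped function of the linear predictor
round(P, 3)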
At this point I'm going to describe our data briefly, and then present our analysis in a later post. The data comes from the 6th October, 2008, edition of the New York Times, specifically an article entitled The Measure of a President (the currently linked document contains corrections printed the next day). I have tabulated this data and it is available from this website. I have also included the outcome of the 2008 election, which was unknown at the time the article was printed (and when my analysis was originally done). You can download it and read it into the analysis programme of your choice. For example, the following will load the data directly into R and then create the needed auxiliary variables.
presdata <- read.table("<data file URL>",   # the data URL given in the original post is not preserved in this fragment
                       skip=6, col.names=c("YEAR","D_FT","D_IN","D_LB","R_FT","R_IN","R_LB","PRES"))
presdata$D_HT <- presdata$D_FT + presdata$D_IN/12
presdata$R_HT <- presdata$R_FT + presdata$R_IN/12
presdata$HEIGHT <- presdata$R_HT - presdata$D_HT
presdata$WEIGHT <- presdata$R_LB - presdata$D_LB
presdata$INCUMBENCY <- c(0, presdata$PRES[1:(length(presdata$PRES)-1)])
presdata$CHANGE <- c(1, 0, presdata$PRES[1:(length(presdata$PRES)-2)])
This data presents the heights and weights of Presidential candidates since 1896, together with the binary outcome (“0” for a Democrat and “1” for a Republican — and this data is organized alphabetically, not by any kind of hidden preference). The light-hearted goal of the article was to investigate whether the public prefers the lighter, taller candidate over the shorter, heavier alternate. We are going to augment this data by computing the Body Mass Index, to ask whether the public preferred the “healthier” candidate, and also the first two lags of the Presidential party indicator, which we will call incumbency and change respectively. We will therefore seek to estimate the probability of a Republican President from this data via Probit Analysis. Results to follow.
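The BMI step is not shown in the original code fragment; a sketch of one way to add it, assuming the conventional imperial-units formula BMI = 703 × weight (lb) / height (in)² and the column names defined above:

presdata$D_BMI <- 703 * presdata$D_LB / (12*presdata$D_FT + presdata$D_IN)^2
presdata$R_BMI <- 703 * presdata$R_LB / (12*presdata$R_FT + presdata$R_IN)^2
presdata$BMI   <- presdata$R_BMI - presdata$D_BMI   # Republican minus Democrat, matching HEIGHT and WEIGHT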
# Error in linear regression
In linear regression we have $$Ax=b$$. Since the equality is an approximate equality, an error vector is used, that is, $$Ax+e=b$$. We know that using the least square method (to minimize the squared sum of the elements of $$e$$) the best $$x$$ is given by: $$x=A^+b$$ where the plus sign represents the pseudoinverse: $$A^+=(A^TA)^{-1}A^T$$. Depending on $$A$$ and $$b$$, there must be some error which is often nonzero as in linear regression we are doing a non-perfect curve estimation. However, $$e=b-Ax$$ which is $$e=b-AA^+b$$ and since $$AA^+=I$$ always holds, the error is always zero, that is, $$e=b(I-AA^+)=b(I-I)=zero$$. Why is that? I think the error vector should not be zero regardless of $$A$$ and $$b$$. Can one explain this to me. Thank you!
## 1 Answer
No.
In linear regression the matrix $$A$$ has more rows than columns, and hence (according to the "Definition" section of the wikipedia article) $$AA^+$$ is not the identity matrix, but rather the projection matrix onto the column space of $$A$$.
The normal equations folded into the formula $$x=A^+b$$ force the fitted error vector $$e=b-AA^+b$$ to be perpendicular to the column space of $$A$$, so the analysis of variance (or Pythagorean theorem) $$\|b\|^2=\|Ax\|^2+\|e\|^2$$ holds.
• Assume the system has more columns than the rows, then would the $AA^+$ be the identity matrix? – Mahdi Rouholamini Nov 23 at 3:00 |
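A small numerical illustration of the answer above (a random tall matrix is assumed): with more rows than columns, $AA^+$ is a projection rather than the identity, and the residual is orthogonal to the columns of $A$ but generally nonzero. In R:

set.seed(1)
A <- matrix(rnorm(20), nrow = 10, ncol = 2)   # tall matrix: more rows than columns
b <- rnorm(10)
Aplus <- solve(t(A) %*% A) %*% t(A)           # pseudoinverse (A^T A)^{-1} A^T
x <- Aplus %*% b                              # least-squares solution
e <- b - A %*% x                              # fitted residual
max(abs(A %*% Aplus - diag(10)))              # far from 0: A A^+ is a projection, not the identity
t(A) %*% e                                    # essentially 0: residual is orthogonal to the column space
sum(e^2)                                      # generally nonzero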
# Light moves at C from all frames of references?
1. Feb 20, 2008
### Bigman
there are a few things i don't get when it comes to light moving at c from all frames of reference... i mean it makes sense to me in some cases: like if an observer on the earth sees a missle going one way at half the speed of light, and a spaceship going the other way at the speed of light, the spaceship won't observe the missle's speed as the speed of light, since time goes faster on the spaceship then it does to the observer on the earth (at least that's my understanding so far from what i've read... just to double check, is everything i said in that example accurate?)
but i can think of a few examples where it doesn't work out as easily (some of them are harder to explain then others). here's one: you have a space station floating out in the middle of no where in space (this is our initial reference point... i would have used earth, but i wanted to avoid all the gravity and orbits and rotation and stuff) and a ship takes off from the station, and ends up doing about half the speed of light (from the station's frame of reference). since the ship has sped up, the clock on board the ship is now going faster then the clock on board the space station (right?). now, lets say you eject two escape pods from the ship: one out the front, and one out the back(the ship is still facing directly away from the station), and they each shoot out with a velocity which, from the ships frame of reference, has a magnitude equal to the velocity of the space station (which is less then half the speed of light, because time is moving faster on the ship then it is on the space station... right?). what confuses me is, how fast are the clocks on board each of the escape pods going in relation to the spaceship, the space station, and each other?
Last edited: Feb 20, 2008
2. Feb 20, 2008
### chroot
Staff Emeritus
Clocks do not magically change their rate simply because they are moving. If you're on-board a star ship, you will look down at your wristwatch and see it behaving perfectly normally, no matter how fast the star ship is going relative to anything else in the universe.
You must have two frames of reference in order to see any effects of time dilation. If a ship leaves a space station at half the speed of light, it will appear to observers in the space station that clocks aboard the ship are running slowly. Similarly, it will appear to observers on the ship that the clocks aboard the space station are running slowly.
Velocities do not add as simply in special relativity as you are accustomed to. If an escape pod leaves the ship at 0.5c wrt the ship, and the ship is moving at 0.5c wrt the space station, the two velocities add like this:
$v = \frac{ 0.5c + 0.5c }{ 1 + \frac{ 0.5c \cdot 0.5c } { c^2 } } = 0.8c$
Observers aboard the space station will measure the escape pod as moving away with a velocity of 0.8c.
- Warren
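In code, the same composition of velocities looks like this (a small R sketch in units where c = 1; the 0.5c values are simply the ones used in this thread):

```r
# Relativistic velocity addition, working in units where c = 1.
add_velocity <- function(u, v) (u + v) / (1 + u * v)

add_velocity(0.5, 0.5)   # 0.8 -> pod ejected out the front, as seen from the station
add_velocity(0.5, -0.5)  # 0.0 -> pod ejected out the back is at rest w.r.t. the station
```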
3. Feb 20, 2008
### yuiop
No material object (anything with greater than zero rest mass) can go at the speed of light relative to any observer.
Generally the clocks on any object moving relative to an observer are measured as "ticking" slower by that observer.
slower
Use the relativistic velocity addition equation to figure out the speed of the pods and then the Lorentz transformations for time to answer this question.
See http://math.ucr.edu/home/baez/physics/Relativity/SR/velocity.html
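Putting those two suggestions together gives something like the following (a rough R sketch, again in units where c = 1, assuming the 0.5c scenario described above):

```r
# Speed of each pod in the station frame, then the time-dilation factor gamma
# that an observer at rest in the station frame would apply to each moving clock.
add_velocity <- function(u, v) (u + v) / (1 + u * v)
lorentz_gamma <- function(v) 1 / sqrt(1 - v^2)

v_front <- add_velocity(0.5, 0.5)   # 0.8: pod ejected forward
v_back  <- add_velocity(0.5, -0.5)  # 0.0: pod ejected backward

lorentz_gamma(v_front)  # ~1.67: station measures the forward pod's clock ticking slow
lorentz_gamma(v_back)   # 1.00: the backward pod keeps pace with the station's clocks
lorentz_gamma(0.5)      # ~1.15: ship relative to the station, and each pod relative to the ship
```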
4. Feb 20, 2008
### Bigman
whoops, i meant to say that the ship was moving at half the speed of light in the first example. in the second example, i was interested mostly in the time dilation, though i think that makes more sense to me now, after reading warren's post in another thread... so now i have a question that has more to do with light itself: let's say you have a ship flying by a space station at .5c, both the ship and the space station have big bright light bulbs protruding from them, and the moment that the ship flies by the space station, both lights flash for an instant (so basically, you have light coming from two sources, which are in virtually the same spot in space but have different velocities). i'm wondering, if you freeze frame everything a moment later, will the two spheres of light be overlapping each other? and if so, where will the center of these spheres be located, at the position of the ship or the station (or will the center's position be somehow dependent on the frame of reference)?
5. Feb 20, 2008
### Janus
Staff Emeritus
The center of the expanding spheres will depend on the frame of reference.
6. Feb 20, 2008
### Bigman
that can't be possible, can it? let's say you had two sensors attached to the spaceship, one a mile out in front of the ship and one a mile out in back (imagine long mic booms sticking off the front and back), and you had two more sensors attached to the station in a similar fashion, and when the ship and station pass each other, the two sensors in the front are next to each other, as are the two sensors in the back. would the ship record that both its sensors went off at the same time? if so, would an observer on the ship say that the light hit the station's front sensor before hitting the station's back sensor, since the ship's sensors and the station's sensors are no longer next to each other by the time the light reaches them?
7. Feb 20, 2008
### yuiop
Imagine that the light sources are so close together at the passing point that we can treat them as one light source for practical purposes. From the point of view of the space station there is ring of light centred on the space station. From the point of view of the ship (some moments later) there is a ring of light centered on the ship.
The ship sees itself as stationary and from that point of view it initially sees the space station approaching. At the moment the space station was alongside it sees a flash and then it sees a ring of light spreading out evenly in all directions and the space station moving away.
The spacestation sees itself as stationary and from that point of view it initially sees the ship approaching. At the moment the ship was alongside it sees a flash and then it sees a ring of light spreading out evenly in all directions and the ship moving away.
See the symmetry?
This is what would really be observed, even if there is only one light source, like a spark flashing across a small gap between the two craft at the moment they are closest to each other.
8. Feb 21, 2008
### Janus
Staff Emeritus
The ship will say the light hit its sensors simultaneously, while hitting the sensors of the station at different times. Conversely, the station will say that the light hit its sensors simultaneously, while hitting the ship's sensors at different times.
Welcome to "The Relativity of Simultaneity".
9. Feb 21, 2008
### Bigman
wow... so if someone in the ship were somehow able to instantaneously observe light, they would observe that the light hit the station's front sensor first, then the ship's two sensors, then the station's back sensor (and someone in the station would observe everything i just said, with the words "station" and "ship" switched)?
10. Feb 21, 2008
### Janus
Staff Emeritus
Yes.
Also consider this:
We put clocks at these sensors, all reading a time of zero and designed to start ticking when the sensor next to it is tripped by the light. Then according to the ship, the clocks next to its sensors start at the same time and are synchronized so that they show the same time at all times. The station's clocks, however, will not start at the same time and thus will not show the same time after they are running (once running they both tick at the same rate, but one clock will lag behind the other.) According to the station, the reverse is true. |
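To put a number on that lag (a rough R sketch; the 0.5c speed and the one-mile sensor booms are simply the values used earlier in this thread):

```r
# Two events simultaneous in the station frame, separated by dx along the motion,
# are separated in the ship frame by dt = gamma * v * dx / c^2 (Lorentz transformation).
c_ms  <- 299792458           # speed of light in m/s
v     <- 0.5 * c_ms          # relative speed of ship and station
dx    <- 2 * 1609.34         # front sensor to back sensor: 2 miles in metres
gamma <- 1 / sqrt(1 - (v / c_ms)^2)

gamma * v * dx / c_ms^2      # ~6.2e-6 s between the starts of the station's clocks, in the ship frame
```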
# Sum over k of r-k Choose m by s+k Choose n
## Theorem
Let $m, n, r, s \in \Z_{\ge 0}$ such that $n \ge s$.
Then:
$\displaystyle \sum_{k \mathop = 0}^r \binom {r - k} m \binom {s + k} n = \binom {r + s + 1} {m + n + 1}$
where $\dbinom {r - k} m$ etc. are binomial coefficients.
## Proof
$\displaystyle \sum_{k \mathop = 0}^r \binom {r - k} m \binom {s + k} n$
$\displaystyle \quad = \sum_{k \mathop = 0}^r \binom {-\left({m + 1}\right)} {r - k - m} \binom {-\left({n + 1}\right)} {s + k - n} \left({-1}\right)^{r - m + s - n}$ (Moving Top Index to Bottom in Binomial Coefficient)
$\displaystyle \quad = \binom {-\left({m + 1}\right) - \left({n + 1}\right)} {r - m + s - n} \left({-1}\right)^{r - m + s - n}$ (Chu-Vandermonde Identity)
$\displaystyle \quad = \binom {r + s + 1} {m + n + 1}$ (Moving Top Index to Bottom in Binomial Coefficient)
$\blacksquare$ |
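The identity is easy to spot-check numerically (a small R sketch using base `choose`; the grid of test values is an arbitrary choice, restricted to $n \ge s$):

```r
# Check sum_{k=0}^{r} C(r-k, m) * C(s+k, n) == C(r+s+1, m+n+1) on small integers.
check_identity <- function(m, n, r, s) {
  k   <- 0:r
  lhs <- sum(choose(r - k, m) * choose(s + k, n))
  rhs <- choose(r + s + 1, m + n + 1)
  lhs == rhs
}

cases <- expand.grid(m = 0:4, n = 0:4, r = 0:6, s = 0:4)
cases <- subset(cases, n >= s)
all(mapply(check_identity, cases$m, cases$n, cases$r, cases$s))  # TRUE
```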
Due: 12 October by 11:00 pm
Purpose, Skills, & Knowledge: The purposes of this assignment are:
• To practice using vectors in R.
• To practice computational problem solving with vectors.
Assessment: Each question indicates the % of the assignment grade, summing to 100%. The credit for each question will be assigned as follows:
• 0% for not attempting a response.
• 50% for attempting the question but with major errors.
• 75% for attempting the question but with minor errors.
• 100% for correctly answering the question.
Rules:
• Problems marked SOLO may not be worked on with other classmates, though you may consult instructors for help.
• For problems marked COLLABORATIVE, you may work in groups of up to 3 students who are in this course this semester. You may not split up the work – everyone must work on every problem. And you may not simply copy any code but rather truly work together.
• Even though you work collaboratively, you still must submit your own solutions.
### 1) Staying organized [SOLO, 5%]
Download and use this template for your assignment. Inside the “hw6” folder, open and edit the R script called “hw6.R” and fill out your name, GW Net ID, and the names of anyone you worked with on this assignment.
### Writing test functions
For each of the following functions, write a test function first, and then write the function. Your test functions will count for half of the available credit for each problem. Think carefully about the test cases to include in your test functions.
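As an illustration of the pattern only (the function `squareIt` and its test below are made-up placeholders, not one of the graded problems, and `stopifnot` is just one possible way to write the assertions if the course does not prescribe another):

```r
# A made-up function and its matching test function, showing the test-first pattern.
squareIt <- function(x) {
  x^2
}

testSquareIt <- function() {
  cat("Testing squareIt()... ")
  stopifnot(squareIt(2) == 4)                    # typical case
  stopifnot(squareIt(-3) == 9)                   # negative input
  stopifnot(squareIt(0) == 0)                    # edge case
  stopifnot(all(squareIt(c(1, 2)) == c(1, 4)))   # vectorized input
  cat("Passed!\n")
}

testSquareIt()
```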
### 2) vectorFactorial(n) [SOLO, 10%]
Write the function vectorFactorial(n) which computes the factorial of n using vectors to avoid using a loop. Hint: there are some useful functions listed on the vectors lesson page for performing operations on a numeric vector.
### 3) nthHighestValue(n, x) [SOLO, 15%]
Write a function to find the nth highest value in a given vector. For example, if x equals c(5, 1, 3), then nthHighestValue(1, x) should return 5, because 5 is the 1st highest value in x, and nthHighestValue(2, x) should return 3 because it’s the 2nd highest value in x. Assume only numeric inputs, and assume that n <= length(x). You may not use loops.
### 4) dotProduct(a, b) [COLLABORATIVE, 20%]
Background: the “dot product” of two vectors is the sum of the products of the corresponding terms. So the dot product of the vectors c(1,2,3) and c(4,5,6) is (1*4) + (2*5) + (3*6), or 4 + 10 + 18 = 32. With this in mind, write the function dotProduct(a, b). This function takes two vectors and returns the dot product of those vectors. If the vectors are not equal length, ignore the extra elements in the longer vector. You may not use loops.
### 5) middleValue(a) [COLLABORATIVE, 20%]
Write the function middleValue(a) that takes a vector of numbers a and returns the value of the middle element (or the average of the two middle elements).
### 6) rotateVector(a, n) [COLLABORATIVE, 25%]
Write the function rotateVector(a, n) which takes a vector a and an integer n and returns a new vector where each element in a is shifted to the right by n indices. For example, if a is c(1, 2, 3, 4) and n is 1, the result should be c(4, 1, 2, 3), but if n is -1, the result should be c(2, 3, 4, 1). If n is larger than the length of a, the function should continue to rotate the vector beyond its starting point. So, if a = c(1, 2, 3, 4) and n = 5, then the result should be a = c(4, 1, 2, 3).
### 7) Submit your files [SOLO, 5%]
Create a zip file of all the files in your R project folder for this assignment and submit the zip file on Blackboard (note: to receive full credit, your submission must follow the above format of using a correctly-named R Project and .R script).
EMSE 4574: Programming for Analytics (Fall 2020) | Tuesdays | 12:45 - 3:15 PM | Dr. John Paul Helveston | jph@gwu.edu
## Saturday, October 31, 2015
### Triggered Fragmentation in Self-gravitating Disks
Triggered fragmentation in self-gravitating discs: forming fragments at small radii
Authors:
Meru et al
Abstract:
We carry out three dimensional radiation hydrodynamical simulations of gravitationally unstable discs to explore the movement of mass in a disc following its initial fragmentation. We find that the radial velocity of the gas in some parts of the disc increases by up to a factor of approximately 10 after the disc fragments, compared to before. While the movement of mass occurs in both the inward and outward directions, the inwards movement can cause the inner spirals of a self-gravitating disc to become sufficiently dense such that they can potentially fragment. This suggests that the dynamical behaviour of fragmented discs may cause subsequent fragmentation to occur at smaller radii than initially expected, but only after an initial fragment has formed in the outer disc.
### AA Tauri Star has a Warped Protoplanetary Disk
X-ray to NIR emission from AA Tauri during the dim state - Occultation of the inner disk and gas-to-dust ratio of the absorber
Authors:
Schneider et al
Abstract:
AA Tau is a well-studied, nearby classical T Tauri star, which is viewed almost edge-on. A warp in its inner disk periodically eclipses the central star, causing a clear modulation of its optical light curve. The system underwent a major dimming event beginning in 2011 caused by an extra absorber, which is most likely associated with additional disk material in the line of sight toward the central source. We present new XMM-Newton X-ray, Hubble Space Telescope FUV, and ground based optical and near-infrared data of the system obtained in 2013 during the long-lasting dim phase. The line width decrease of the fluorescent H2 disk emission shows that the extra absorber is located at r greater than 1au. Comparison of X-ray absorption (NH) with dust extinction (AV), as derived from measurements obtained one inner disk orbit (eight days) after the X-ray measurement, indicates that the gas-to-dust ratio as probed by the NH to AV ratio of the extra absorber is compatible with the ISM ratio. Combining both results suggests that the extra absorber, i.e., material at r greater than 1au, has no significant gas excess in contrast to the elevated gas-to-dust ratio previously derived for material in the inner region (≲0.1au).
### Spiral-driven Accretion in Protoplanetary Disks
Spiral-driven accretion in protoplanetary discs - I. 2D models
Authors:
Lesur et al
Abstract:
We numerically investigate the dynamics of a 2D non-magnetised protoplanetary disc surrounded by an inflow coming from an external envelope. We find that the accretion shock between the disc and the inflow is unstable, leading to the generation of large-amplitude spiral density waves. These spiral waves propagate over long distances, down to radii at least ten times smaller than the accretion shock radius. We measure spiral-driven outward angular momentum transport with 1e-4 less than alpha less than 1e-2 for an inflow accretion rate Mout greater than 1e-8 Msun/yr. We conclude that the interaction of the disc with its envelope leads to long-lived spiral density waves and radial angular momentum transport with rates that cannot be neglected in young non-magnetised protostellar discs.
## Friday, October 30, 2015
### Grain Growth in the Circumstellar Disks of the Young Stars CY Tau and DoAr 25
Grain Growth in the Circumstellar Disks of the Young Stars CY Tau and DoAr 25
Authors:
Pérez et al
Abstract:
We present new results from the Disks@EVLA program for two young stars: CY Tau and DoAr 25. We trace continuum emission arising from their circumstellar disks from spatially resolved observations, down to tens of AU scales, at λ = 0.9, 2.8, 8.0, and 9.8 mm for DoAr25 and at λ = 1.3, 2.8, and 7.1 mm for CY Tau. Additionally, we constrain the amount of emission whose origin is different from thermal dust emission from 5 cm observations. Directly from interferometric data, we find that observations at 7 mm and 1 cm trace emission from a compact disk while millimeter-wave observations trace an extended disk structure. From a physical disk model, where we characterize the disk structure of CY Tau and DoAr 25 at wavelengths shorter than 5 cm, we find that (1) dust continuum emission is optically thin at the observed wavelengths and over the spatial scales studied, (2) a constant value of the dust opacity is not warranted by our observations, and (3) a high-significance radial gradient of the dust opacity spectral index, β, is consistent with the observed dust emission in both disks, with low-β in the inner disk and high-β in the outer disk. Assuming that changes in dust properties arise solely due to changes in the maximum particle size (amax), we constrain radial variations of amax in both disks, from cm-sized particles in the inner disk (R less than 40 AU) to millimeter sizes in the outer disk (R greater than 80 AU). These observational constraints agree with theoretical predictions of the radial-drift barrier; however, fragmentation of dust grains could explain our amax(R) constraints if these disks have lower turbulence and/or if dust can survive high-velocity collisions.
### Non-azimuthal Linear Polarization in Protoplanetary Disks
Non-azimuthal linear polarization in protoplanetary disks
Authors:
Canovas et al
Abstract:
Several studies discussing imaging polarimetry observations of protoplanetary disks use the so-called radial Stokes parameters Q_phi and U_phi to discuss the results. This approach has the advantage of providing a direct measure of the noise in the polarized images under the assumption that the polarization is azimuthal only, i.e., perpendicular to the direction towards the illuminating source. However, a detailed study of the validity of this assumption is currently missing. We aim to test whether departures from azimuthal polarization can naturally be produced by scattering processes in optically thick protoplanetary disks at near infrared wavelengths. We use the radiative transfer code MCFOST to create a generic model of a transition disk using different grain size distributions and dust masses. From these models we generate synthetic polarized images at 2.2 μm. We find that even for moderate inclinations (e.g., i = 40 deg), multiple scattering alone can produce significant (up to ~4.5% of the Q_phi image) non-azimuthal polarization reflected in the U_phi images. We also find that different grain populations can naturally produce radial polarization (negative values in the Q_phi images). Our results suggest that caution is recommended when interpreting polarized images by only analyzing the Q_phi and U_phi images. We find that there can be astrophysical signal in the U_phi images and negative values in the Q_phi images, which indicate departures from azimuthal polarization. If significant signal is detected in the U_phi images, we recommend checking the standard Q and U images to look for departures from azimuthal polarization. On the positive side, signal in the U_phi images once all instrumental and data-reduction artifacts have been corrected for means that there is more information to be extracted regarding the dust population and particle density.
### L Class Brown Dwarf WISEP J190648.47+401106.8 Multiyear Observations Show Long Term Clouds
Kepler Monitoring of an L Dwarf II. Clouds with Multiyear Lifetimes
Authors:
Gizis et al
Abstract:
We present Kepler, Spitzer Space Telescope, Gemini-North, MMT, and Kitt Peak observations of the L1 dwarf WISEP J190648.47+401106.8. We find that the Kepler optical light curve is consistent in phase and amplitude over the nearly two years of monitoring with a peak-to-peak amplitude of 1.4%. Spitzer Infrared Array Camera 3.6 micron observations are in phase with Kepler with similar light curve shape and peak-to-peak amplitude 1.1%, but at 4.5 micron, the variability has amplitude less than 0.1%. Chromospheric Hα emission is variable but not synced with the stable Kepler light curve. A single dark spot can reproduce the light curve but is not a unique solution. An inhomogeneous cloud deck, specifically a region of thick cloud cover, can explain the multi-wavelength data of this ultracool dwarf and need not be coupled with the asynchronous magnetic emission variations. The long life of the cloud is in contrast with weather changes seen in cooler brown dwarfs on the timescale of hours and days.
## Thursday, October 29, 2015
### Simulations Suggest Spiral Arms in Protoplanetary Disks Presence of Exoplanets
A team of astronomers is proposing that huge spiral patterns seen around some newborn stars, merely a few million years old (about one percent our sun's age), may be evidence for the presence of giant unseen planets. This idea not only opens the door to a new method of planet detection, but also could offer a look into the early formative years of planet birth.
Though astronomers have cataloged thousands of planets orbiting other stars, the very earliest stages of planet formation are elusive because nascent planets are born and embedded inside vast, pancake-shaped disks of dust and gas encircling newborn stars, known as circumstellar disks.
The conclusion that planets may betray their presence by modifying circumstellar disks on large scales is based on detailed computer modeling of how gas-and-dust disks evolve around newborn stars, which was conducted by two NASA Hubble Fellows, Ruobing Dong of Lawrence Berkeley National Laboratory, and Zhaohuan Zhu of Princeton University. Their research was published in the Aug. 5 edition of The Astrophysical Journal Letters.
### WASP-41 and WASP-47 hot Jupiter Systems Have Other Giant Exoplanets
Hot Jupiters with relatives: discovery of additional planets in orbit around WASP-41 and WASP-47
Authors:
Neveu-VanMalle et al
Abstract:
We report the discovery of two additional planetary companions to WASP-41 and WASP-47. WASP-41 c is a planet of minimum mass 3.18 ± 0.20 MJup, eccentricity 0.29 ± 0.02 and orbiting in 421 ± 2 days. WASP-47 c is a planet of minimum mass 1.24 ± 0.22 MJup, eccentricity 0.13 ± 0.10 and orbiting in 572 ± 7 days. Unlike most of the planetary systems including a hot Jupiter, these two systems with a hot Jupiter have a long period planet located at only ∼1 AU from their host star. WASP-41 is a rather young star known to be chromospherically active. To differentiate its magnetic cycle from the radial velocity effect due the second planet, we use the emission in the Hα line and find this indicator well suited to detect the stellar activity pattern and the magnetic cycle. The analysis of the Rossiter-McLaughlin effect induced by WASP-41 b suggests that the planet could be misaligned, though an aligned orbit cannot be excluded. WASP-47 has recently been found to host two additional transiting super Earths. With such an unprecedented architecture, the WASP-47 system will be very important for the understanding of planetary migration.
### Hot Jupiter WASP-57b has a Shorter Orbital Period and is Smaller Than Previously Thought
Larger and faster: revised properties and a shorter orbital period for the WASP-57 planetary system from a pro-am collaboration
Authors:
Southworth et al
Abstract:
Transits in the WASP-57 planetary system have been found to occur half an hour earlier than expected. We present ten transit light curves from amateur telescopes, on which this discovery was based, thirteen transit light curves from professional facilities which confirm and refine this finding, and high-resolution imaging which show no evidence for nearby companions. We use these data to determine a new and precise orbital ephemeris, and measure the physical properties of the system. Our revised orbital period is 4.5s shorter than found from the discovery data alone, which explains the early occurrence of the transits. We also find both the star and planet to be larger and less massive than previously thought. The measured mass and radius of the planet are now consistent with theoretical models of gas giants containing no heavy-element core, as expected for the sub-solar metallicity of the host star. Two transits were observed simultaneously in four passbands. We use the resulting light curves to measure the planet's radius as a function of wavelength, finding that our data are sufficient in principle but not in practise to constrain its atmospheric properties. We conclude with a discussion of the current and future status of transmission photometry studies for probing the atmospheres of gas-giant transiting planets.
### Center-to-limb Variation Very Important to the Interpretation of Planetary Transit Spectroscopy
The center-to-limb variation across the Fraunhofer lines of HD 189733; Sampling the stellar spectrum using a transiting planet
Authors:
Czesla et al
Abstract:
The center-to-limb variation (CLV) describes the brightness of the stellar disk as a function of the limb angle. Across strong absorption lines, the CLV can vary quite significantly. We obtained a densely sampled time series of high-resolution transit spectra of the active planet host star HD 189733 with UVES. Using the passing planetary disk of the hot Jupiter HD 189733 b as a probe, we study the CLV in the wings of the Ca II H and K and Na I D1 and D2 Fraunhofer lines, which are not strongly affected by activity-induced variability. In agreement with model predictions, our analysis shows that the wings of the studied Fraunhofer lines are limb brightened with respect to the (quasi-)continuum. The strength of the CLV-induced effect can be on the same order as signals found for hot Jupiter atmospheres. Therefore, a careful treatment of the wavelength dependence of the stellar CLV in strong absorption lines is highly relevant in the interpretation of planetary transit spectroscopy.
## Wednesday, October 28, 2015
### Did the Solar System Originally Form as a Compact, 5 Gas Giant Planet System?
Tilting Saturn without tilting Jupiter: Constraints on giant planet migration
Authors:
Brasser et al
Abstract:
The migration and encounter histories of the giant planets in our Solar System can be constrained by the obliquities of Jupiter and Saturn. We have performed secular simulations with imposed migration and N-body simulations with planetesimals to study the expected obliquity distribution of migrating planets with initial conditions resembling those of the smooth migration model, the resonant Nice model and two models with five giant planets initially in resonance (one compact and one loose configuration). For smooth migration, the secular spin-orbit resonance mechanism can tilt Saturn's spin axis to the current obliquity if the product of the migration time scale and the orbital inclinations is sufficiently large (exceeding 30 Myr deg). For the resonant Nice model with imposed migration, it is difficult to reproduce today's obliquity values, because the compactness of the initial system raises the frequency that tilts Saturn above the spin precession frequency of Jupiter, causing a Jupiter spin-orbit resonance crossing. Migration time scales sufficiently long to tilt Saturn generally suffice to tilt Jupiter more than is observed. The full N-body simulations tell a somewhat different story, with Jupiter generally being tilted as often as Saturn, but on average having a higher obliquity. The main obstacle is the final orbital spacing of the giant planets, coupled with the tail of Neptune's migration. The resonant Nice case is barely able to simultaneously reproduce the orbital and spin properties of the giant planets, with a probability ~0.15%. The loose five planet model is unable to match all our constraints (probability less than 0.08%). The compact five planet model has the highest chance of matching the orbital and obliquity constraints simultaneously (probability ~0.3%).
### Lithium Depletion may Signal Exoplanetary System Presence
Accretion of planetary matter and the lithium problem in the 16 Cygni stellar system
Authors:
Deal et al
Abstract:
The 16 Cyg system is composed of two solar analogs with similar masses and ages. A red dwarf is in orbit around 16 Cyg A whereas 16 Cyg B hosts a giant planet. The abundances of heavy elements are similar in the two stars but lithium is much more depleted in 16 Cyg B than in 16 Cyg A, by a factor of at least 4.7. The interest of studying the 16 Cyg system is that the two stars have the same age and the same initial composition. The presently observed differences must be due to their different evolution, related to the fact that one of them hosts a planet while the other does not. We computed models of the two stars which precisely fit the observed seismic frequencies. We used the Toulouse Geneva Evolution Code (TGEC) that includes complete atomic diffusion (including radiative accelerations). We compared the predicted surface abundances with the spectroscopic observations and confirmed that another mixing process is needed. We then included the effect of accretion-induced fingering convection. The accretion of planetary matter does not change the metal abundances but leads to lithium destruction which depends on the accreted mass. A fraction of an Earth mass is enough to explain the lithium surface abundances of 16 Cyg B. We also checked the beryllium abundances. In the case of accretion of heavy matter onto stellar surfaces, the accreted heavy elements do not remain in the outer convective zones but they are mixed downwards by fingering convection induced by the unstable μ-gradient. Depending on the accreted mass, this mixing process may transport lithium down to its nuclear destruction layers and lead to an extra lithium depletion at the surface. A fraction of an Earth mass is enough to explain a lithium ratio of 4.7 in the 16 Cyg system. In this case beryllium is not destroyed. Such a process may be frequent in planet host stars and should be studied in other cases in the future.
### Gas Giants in the HL Tau Protoplanetary Disk
Hunting for planets in the HL Tau disk
Authors:
Testi et al
Abstract:
Recent ALMA images of HL Tau show gaps in the dusty disk that may be caused by planetary bodies. Given the young age of this system, if confirmed, this finding would imply very short timescales for planet formation, probably in a gravitationally unstable disk. To test this scenario, we searched for young planets by means of direct imaging in the L'-band using the Large Binocular Telescope Interferometer mid-infrared camera. At the location of two prominent dips in the dust distribution at ~70AU (~0.5") from the central star we reach a contrast level of ~7.5mag. We did not detect any point source at the location of the rings. Using evolutionary models we derive upper limits of ~10-15MJup at less than or equal to 0.5-1Ma for the possible planets. With these sensitivity limits we should have been able to detect companions sufficiently massive to open full gaps in the disk. The structures detected at mm-wavelengths could be gaps in the distributions of large grains on the disk midplane, caused by planets not massive enough to fully open gaps. Future ALMA observations of the molecular gas density profile and kinematics as well as higher contrast infrared observations may be able to provide a definitive answer.
## Tuesday, October 27, 2015
### Refined Characteristics of Jupiter Analog 51 Eridani b
Astrometric Confirmation and Preliminary Orbital Parameters of the Young Exoplanet 51 Eridani b with the Gemini Planet Imager
Authors:
De Rosa et al
Abstract:
We present new GPI observations of the young exoplanet 51 Eridani b which provide further evidence that the companion is physically associated with 51 Eridani. Combining this new astrometric measurement with those reported in the literature, we significantly reduce the posterior probability that 51 Eridani b is an unbound foreground or background T-dwarf in a chance alignment with 51 Eridani to 2×10−7, an order of magnitude lower than previously reported. If 51 Eridani b is indeed a bound object, then we have detected orbital motion of the planet between the discovery epoch and the latest epoch. By implementing a computationally efficient Monte Carlo technique, preliminary constraints are placed on the orbital parameters of the system. The current set of astrometric measurements suggest an orbital semi-major axis of 14 (+7/−3) AU, corresponding to a period of 41 (+35/−12) yr (assuming a mass of 1.75 M⊙ for the central star), and an inclination of 138 (+15/−13) deg. The remaining orbital elements are only marginally constrained by the current measurements. These preliminary values suggest an orbit which does not share the same inclination as the orbit of the distant M-dwarf binary, GJ 3305, which is a wide physically bound companion to 51 Eridani.
### Tatooine Nurseries
Tatooine Nurseries: Structure and Evolution of Circumbinary Protoplanetary Disks
Authors:
Vartanyan et al
Abstract:
Recent discoveries of circumbinary planets by Kepler mission provide motivation for understanding their birthplaces - protoplanetary disks around stellar binaries with separations less than 1 AU. We explore properties and evolution of such circumbinary disks focusing on modification of their structure caused by tidal coupling to the binary. We develop a set of analytical scaling relations describing viscous evolution of the disk properties, which are verified and calibrated using 1D numerical calculations with realistic inputs. Injection of angular momentum by the central binary suppresses mass accretion onto the binary and causes radial distribution of the viscous angular momentum flux F_J to be different from that in a standard accretion disk around a single star with no torque at the center. Disks with no mass accretion at the center develop F_J profile which is flat in radius. Radial profiles of temperature and surface density are also quite different from those in disks around single stars. Damping of the density waves driven by the binary and viscous dissipation dominate heating of the inner disk (within 1-2 AU), pushing the iceline beyond 3-5 AU, depending on disk mass and age. Irradiation by the binary governs disk thermodynamics beyond ~10 AU. However, self-shadowing by the hot inner disk may render central illumination irrelevant out to ~20 AU. Spectral energy distribution of a circumbinary disk exhibits a distinctive bump around 10 micron, which may facilitate identification of such disks around unresolved binaries. Efficient tidal coupling to the disk drives orbital inspiral of the binary and may cause low-mass and compact binaries to merge into a single star within the disk lifetime. We generally find that circumbinary disks present favorable sites for planet formation (despite wider zone of volatile depletion), in agreement with the statistics of Kepler circumbinary planets.
### The Lack of Kozai-Lidov Cycles for Most Circumbinary Exoplanets
Kozai-Lidov cycles towards the limit of circumbinary planets
Authors:
Martin et al
Abstract:
In this paper we answer a simple question: can a misaligned circumbinary planet induce Kozai-Lidov cycles on an inner stellar binary? We use known analytic equations to analyse the behaviour of the Kozai-Lidov effect as the outer mass is made small. We demonstrate a significant departure from the traditional symmetry, critical angles and amplitude of the effect. Aside from massive planets on near-polar orbits, circumbinary planetary systems are devoid of Kozai-Lidov cycles. This has positive implications for the existence of highly misaligned circumbinary planets: an observationally unexplored and theoretically important parameter space.
## Monday, October 26, 2015
### Modeling the Atmospheres of Exoplanets Around Different Host Stars at Different Temperatures
Model atmospheres of irradiated exoplanets: The influence of stellar parameters, metallicity, and the C/O ratio
Authors:
Mollière et al
Abstract:
Many parameters constraining the spectral appearance of exoplanets are still poorly understood. We therefore study the properties of irradiated exoplanet atmospheres over a wide parameter range including metallicity, C/O ratio and host spectral type. We calculate a grid of 1-d radiative-convective atmospheres and emission spectra. We perform the calculations with our new Pressure-Temperature Iterator and Spectral Emission Calculator for Planetary Atmospheres (PETIT) code, assuming chemical equilibrium. The atmospheric structures and spectra are made available online. We find that atmospheres of planets with C/O ratios ∼ 1 and Teff ≳ 1500 K can exhibit inversions due to heating by the alkalis because the main coolants CH4, H2O and HCN are depleted. Therefore, temperature inversions possibly occur without the presence of additional absorbers like TiO and VO. At low temperatures we find that the pressure level of the photosphere strongly influences whether the atmospheric opacity is dominated by either water (for low C/O) or methane (for high C/O), or both (regardless of the C/O). For hot, carbon-rich objects this pressure level governs whether the atmosphere is dominated by methane or HCN. Further we find that host stars of late spectral type lead to planetary atmospheres which have shallower, more isothermal temperature profiles. In agreement with prior work we find that for planets with Teff less than 1750 K the transition between water or methane dominated spectra occurs at C/O ∼ 0.7, instead of ∼ 1, because condensation preferentially removes oxygen.
### Problems With Detecting ExoEarths With the Proposed High Definition Space Telescope
Issues with the High Definition Space Telescope (HDST) ExoEarth Biosignature Case: A Critique of the 2015 AURA Report "From Cosmic Birth to Living Earths: the future of UVOIR Astronomy"
Author:
Elvis
Abstract:
"From Cosmic Birth to Living Earths" advocates a 12-meter optical/near-IR space telescope for launch ~2035. The goal that sets this large size is the detection of biosignatures from Earth-like planets in their habitable zones around G-stars. The discovery of a single instance of life elsewhere in the universe would be a profound event for humanity. But not at any cost. At 8-9B USD this High Definition Space Telescope (HDST) would take all the NASA astrophysics budget for nearly 20 years, unless new funds are found. For a generation NASA could build no "Greater Observatories" matching JWST in the rest of the spectrum. This opportunity cost prompted me to study the driving exobiosphere detection case for HDST. I find that: (1) the focus on G-stars is not well justified; (2) only G-stars require the use of direct imaging; (3) in the chosen 0.5 - 2.5 micron band, the available biosignatures are ambiguous and a larger sample does not help; (4) the expected number of exobiospheres is 1, with a 5% chance of zero; (5) the accessible sample size is too small to show that exobiospheres are rare; (6) a sufficiently large sample would require a much larger telescope; (7) the great progress in M-star planet spectroscopy - both now and with new techniques, instruments and telescopes already planned - means that a biosignature will likely be found before HDST could complete its search in ~2045. For all these reasons I regretfully conclude that HDST, while commendably ambitious, is not the right choice for NASA Astrophysics at this time. The first exobiosphere discovery is likely to be such a major event that scientific and public pressure will produce new funding across a range of disciplines, not just astrophysics, to study the nature of Life in the Universe. Then will be the time when a broader science community can advocate for a mission that will make definitive exobiosphere measurements.
### Terrestrial-type ExoPlanet Formation: Comparing Different Types of Initial Conditions
Terrestrial-type planet formation: Comparing different types of initial conditions
Authors:
Ronco et al
Abstract:
To study the terrestrial-type planet formation during the post oligarchic growth, the initial distributions of planetary embryos and planetesimals used in N-body simulations play an important role. Most of these studies typically use ad hoc initial distributions based on theoretical and numerical studies. We analyze the formation of planetary systems without gas giants around solar-type stars focusing on the sensitivity of the results to the particular initial distributions of planetesimals and embryos. The formation of terrestrial planets in the habitable zone (HZ) and their final water contents are topics of interest. We developed two different sets of N-body simulations from the same protoplanetary disk. The first set assumes ad hoc initial distributions for embryos and planetesimals and the second set obtains these distributions from the results of a semi-analytical model which simulates the evolution of the gaseous phase of the disk. Both sets form planets in the HZ. Ad hoc initial conditions form planets in the HZ with masses from 0.66M⊕ to 2.27M⊕. More realistic initial conditions, obtained from a semi-analytical model, form planets with masses between 1.18M⊕ and 2.21M⊕. Both sets form planets in the HZ with water contents between 4.5% and 39.48% by mass. Those planets with the highest water contents, compared to those with the lowest, present differences regarding the sources of water supply. We suggest that the number of planets in the HZ is not sensitive to the particular initial distribution of embryos and planetesimals and thus, the results are globally similar between both sets. However, the main differences are associated with the accretion history of the planets in the HZ. These discrepancies have a direct impact on the accretion of water-rich material and on the physical characteristics of the resulting planets.
## Sunday, October 25, 2015
### No Keplerian Disk >10 AU around the Protostar B335: Magnetic Braking or Young Age?
No Keplerian Disk greater than 10 AU around the Protostar B335: Magnetic Braking or Young Age?
Authors:
Yen et al
Abstract:
We have conducted ALMA cycle 2 observations in the 1.3 mm continuum and in the C18O (2-1) and SO (5_6-4_5) lines at a resolution of ~0.3" toward the Class 0 protostar B335. The 1.3 mm continuum, C18O, and SO emission all show central compact components with sizes of ~40-180 AU within more extended components. The C18O component shows signs of infalling and rotational motion. By fitting simple kinematic models to the C18O data, the protostellar mass is estimated to be 0.05 Msun. The specific angular momentum, on a 100 AU scale, is ~4.3E-5 km/s*pc. A similar specific angular momentum, ~3E-5 to 5E-5 km/s*pc, is measured on a 10 AU scale from the velocity gradient observed in the central SO component, and there is no clear sign of an infalling motion in the SO emission. By comparing the infalling and rotational motion, our ALMA results suggest that the observed rotational motion has not yet reached Keplerian velocity neither on a 100 AU nor even on a 10 AU scale. Consequently, the radius of the Keplerian disk in B335 (if present) is expected to be 1-3 AU. The expected disk radius in B335 is one to two orders of magnitude smaller than those of observed Keplerian disks around other Class 0 protostars. Based on the observed infalling and rotational motion from 0.1 pc to inner 100 AU scales, there are two possible scenarios to explain the presence of such a small Keplerian disk in B335: magnetic braking and young age. If our finding is the consequence of magnetic braking, ~50% of the angular momentum of the infalling material within a 1000 AU scale might have been removed, and the magnetic field strength on a 1000 AU scale is estimated to be ~200 uG. If it is young age, the infalling radius in B335 is estimated to be ~2700 AU, corresponding to a collapsing time scale of ~5E4 yr.
### New Members of the TW Hya Association and 2 Accreting M Dwarfs in Sco-Cen
New members of the TW Hydrae Association and two accreting M-dwarfs in Scorpius–Centaurus
Authors:
Murphy et al
Abstract:
We report the serendipitous discovery of several young mid-M stars found during a search for new members of the 30–40 Myr-old Octans Association. Only one of the stars may be considered a possible Octans(-Near) member. However, two stars have proper motions, kinematic distances, radial velocities, photometry and Li i λ6708 measurements consistent with membership in the 8–10 Myr-old TW Hydrae Association. Another may be an outlying member of TW Hydrae but has a velocity similar to that predicted by membership in Octans. We also identify two new lithium-rich members of the neighbouring Scorpius–Centaurus OB Association (Sco–Cen). Both exhibit large 12 and 22 μm excesses and strong, variable Hα emission which we attribute to accretion from circumstellar discs. Such stars are thought to be incredibly rare at the ∼16 Myr median age of Sco–Cen and they join only one other confirmed M-type and three higher mass accretors outside of Upper Scorpius. The serendipitous discovery of two accreting stars hosting large quantities of circumstellar material may be indicative of a sizeable age spread in Sco–Cen, or further evidence that disc dispersal and planet formation time-scales are longer around lower mass stars. To aid future studies of Sco–Cen, we also provide a newly compiled catalogue of 305 early-type Hipparcos members with spectroscopic radial velocities sourced from the literature.
### M Dwarfs Found in TW Hydrae Association With Circumstellar Disks
An ALMA Survey for Disks Orbiting Low-Mass Stars in the TW Hya Association
Authors:
Rodriguez et al
Abstract:
We have carried out an ALMA survey of 15 confirmed or candidate low-mass (less than 0.2M⊙) members of the TW Hya Association (TWA) with the goal of detecting molecular gas in the form of CO emission, as well as providing constraints on continuum emission due to cold dust. Our targets have spectral types of M4-L0 and hence represent the extreme low end of the TWA's mass function. Our ALMA survey has yielded detections of 1.3mm continuum emission around 4 systems (TWA 30B, 32, 33, & 34), suggesting the presence of cold dust grains. All continuum sources are unresolved. TWA 34 further shows 12CO(2-1) emission whose velocity structure is indicative of Keplerian rotation. Among the sample of known ~7-10 Myr-old star/disk systems, TWA 34, which lies just ~50 pc from Earth, is the lowest mass star thus far identified as harboring cold molecular gas in an orbiting disk.
## Saturday, October 24, 2015
### Large Dust Gaps in the Transitional Disks of HD 100453 and HD 34282
Large dust gaps in the transitional disks of HD 100453 and HD 34282
Authors:
Abstract:
The formation of dust gaps in protoplanetary disks is one of the most important signposts of disk evolution and possibly the formation of planets. We aim to characterize the 'flaring' disk structure around the Herbig Ae/Be stars HD 100453 and HD 34282. Their spectral energy distributions (SEDs) show an emission excess between 15-40 μm, but very weak (HD 100453) and no (HD 34282) signs of the 10 and 20 μm amorphous silicate features. We investigate whether this implies the presence of large dust gaps. In this work, spatially resolved mid-infrared Q-band images taken with Gemini North/MICHELLE are investigated. We perform radiative transfer modeling and examine the radial distribution of dust. We simultaneously fit the Q-band images and SEDs of HD 100453 and HD 34282. Our solutions require that the inner-halos and outer-disks are likely separated by large dust gaps that are depleted with respect to the outer disk by a factor of 1000 or more. The inner edges of the outer disks of HD 100453 and HD 34282 have temperatures of about 160±10 K and 60±5 K respectively. Because of the high surface brightnesses of these walls, they dominate the emission in the Q-band. Their radii are constrained at 20+2 AU and 92+31 AU, respectively. We conclude that HD 100453 and HD 34282 likely have disk dust gaps and the upper limit on the dust mass in each gap is estimated to be about 10−7M⊙. We find that the locations and sizes of disk dust gaps are connected to the SED, as traced by the mid-infrared flux ratio F30/F13.5. We propose a new classification scheme for the Meeus groups (Meeus et al. 2001) based on the F30/F13.5 ratio. The absence of amorphous silicate features in the observed SEDs is caused by the depletion of small (smaller than 1 μm) silicate dust at temperatures above 160 K, which could be related to the presence of a dust gap in that region of the disk.
### RW Aurigae & V409 Tau's Protoplanetary Disk Anomalies
First Results from the Disk Eclipse Search with KELT (DESK) Survey
Authors:
Rodriguez et al
Abstract:
Using time-series photometry from the Kilodegree Extremely Little Telescope (KELT) exoplanet survey, we are looking for eclipses of stars by their protoplanetary disks, specifically in young stellar associations. To date, we have discovered two previously unknown, large dimming events around the young stars RW Aurigae and V409 Tau. We attribute the dimming of RW Aurigae to an occultation by its tidally disrupted disk, with the disruption perhaps resulting from a recent flyby of its binary companion. Even with the dynamical environment of RW Aurigae, the distorted disk material remains very compact and presumably capable of forming planets. This system also shows that strong binary interactions with disks can also influence planet and core composition by stirring up and mixing materials during planet formation. We interpret the dimming of V409 Tau to be due to a feature, possibly a warp or perturbation, lying at least 10 AU from the host star in its nearly edge-on circumstellar disk.
### Dust Grain Size/Stellar Luminosity Trend in Debris Disks
The dust grain size - stellar luminosity trend in debris discs
Authors:
Pawellek et al
Abstract:
The cross section of material in debris discs is thought to be dominated by the smallest grains that can still stay in bound orbits despite the repelling action of stellar radiation pressure. Thus the minimum (and typical) grain size smin is expected to be close to the radiation pressure blowout size sblow. Yet a recent analysis of a sample of Herschel-resolved debris discs showed the ratio smin/sblow to systematically decrease with the stellar luminosity from about ten for solar-type stars to nearly unity in the discs around the most luminous A-type stars. Here we explore this trend in more detail, checking how significant it is and seeking to find possible explanations. We show that the trend is robust to variation of the composition and porosity of dust particles. For any assumed grain properties and stellar parameters, we suggest a recipe of how to estimate the "true" radius of a spatially unresolved debris disc, based solely on its spectral energy distribution. The results of our collisional simulations are qualitatively consistent with the trend, although additional effects may also be at work. In particular, the lack of grains with small smin/sblow for lower luminosity stars might be caused by the grain surface energy constraint that should limit the size of the smallest collisional fragments. Also, a better agreement between the data and the collisional simulations is achieved when assuming debris discs of more luminous stars to have higher dynamical excitation than those of less luminous primaries. This would imply that protoplanetary discs of more massive young stars are more efficient in forming big planetesimals or planets that act as stirrers in the debris discs at the subsequent evolutionary stage.
## Friday, October 23, 2015
### The Structure of the Silicate Clouds of Luhman 16 A & B
Cloud Structure of the Nearest Brown Dwarfs II: High-amplitude variability for Luhman 16 A and B in and out of the 0.99 micron FeH feature
Authors:
Buenzli et al
Abstract:
The re-emergence of the 0.99 μm FeH feature in brown dwarfs of early- to mid-T spectral type has been suggested as evidence for cloud disruption where flux from deep, hot regions below the Fe cloud deck can emerge. The same mechanism could account for color changes at the L/T transition and photometric variability. We present the first observations of spectroscopic variability of brown dwarfs covering the 0.99 μm FeH feature. We observed the spatially resolved very nearby brown dwarf binary WISE J104915.57-531906.1 (Luhman 16AB), a late-L and early-T dwarf, with HST/WFC3 in the G102 grism at 0.8-1.15 μm. We find significant variability at all wavelengths for both brown dwarfs, with peak-to-valley amplitudes of 9.3% for Luhman 16B and 4.5% for Luhman 16A. This represents the first unambiguous detection of variability in Luhman 16A. We estimate a rotational period between 4.5 and 5.5 h, very similar to Luhman 16B. Variability in both components complicates the interpretation of spatially unresolved observations. The probability for finding large amplitude variability in any two brown dwarfs is less than 10%. Our finding may suggest that a common but yet unknown feature of the binary is important for the occurrence of variability. For both objects, the amplitude is nearly constant at all wavelengths except in the deep K I feature below 0.84 μm. No variations are seen across the 0.99 μm FeH feature. The observations lend strong further support to cloud height variations rather than holes in the silicate clouds, but cannot fully rule out holes in the iron clouds. We re-evaluate the diagnostic potential of the FeH feature as a tracer of cloud patchiness.
### A0V Star HR3549A has a Brown Dwarf Companion
Discovery of a low-mass companion around HR3549
Authors:
Mawet et al
Abstract:
We report the discovery of a low-mass companion to HR3549, an A0V star surrounded by a debris disk with a warm excess detected by WISE at 22 μm (10σ significance). We imaged HR3549 B in the L-band with NAOS-CONICA, the adaptive optics infrared camera of the Very Large Telescope, in January 2013 and confirmed its common proper motion in January 2015. The companion is at a projected separation of ≃80 AU and position angle of ≃157∘, so it is orbiting well beyond the warm disk inner edge of r>10 AU. Our age estimate for this system corresponds to a companion mass in the range 15-80 MJ, spanning the brown dwarf regime, and so HR3549 B is another recent addition to the growing list of brown dwarf desert objects with extreme mass ratios. The simultaneous presence of a warm disk and a brown dwarf around HR3549 provides interesting empirical constraints on models of the formation of substellar companions.
### 7 Brown Dwarfs Found in the rho Ophiuchus Cloud
Mapping the shores of the brown dwarf desert. IV. Ophiuchus
Authors:
Cheetham et al
Abstract:
We conduct a multiplicity survey of members of the rho Ophiuchus cloud complex with high resolution imaging to characterize the multiple star population of this nearby star forming region and investigate the relation between stellar multiplicity and star and planet formation. Our aperture masking survey reveals the presence of 5 new stellar companions beyond the reach of previous studies, but does not result in the detection of any new substellar companions. We find that 43+/-6% of the 114 stars in our survey have stellar mass companions between 1.3-780AU, while 7 (+8 -5)% host brown dwarf companions in the same interval. By combining this information with knowledge of disk-hosting stars, we show that the presence of a close binary companion (separation less than 40 AU) significantly influences the lifetime of protoplanetary disks, a phenomenon previously seen in older star forming regions. At the ~1-2Myr age of our Ophiuchus members ~2/3 of close binary systems have lost their disks, compared to only ~30% of single stars and wide binaries. This has significant impact on the formation of giant planets, which are expected to require much longer than 1 Myr to form via core accretion and thus planets formed via this pathway should be rare in close binary systems.
## Thursday, October 22, 2015
### Dust and Condensates in the Atmospheres of Hot Worlds and Comet-like Worlds
Tables of phase functions, opacities, albedos, equilibrium temperatures, and radiative accelerations of dust grains in exoplanets
Authors:
Budaj et al
Abstract:
There has been growing observational evidence for the presence of condensates in the atmospheres and/or comet-like tails of extrasolar planets. As a result, systematic and homogeneous tables of dust properties are useful in order to facilitate further observational and theoretical studies. In this paper we present calculations and analysis of non-isotropic phase functions, asymmetry parameter (mean cosine of the scattering angle), absorption and scattering opacities, single scattering albedos, equilibrium temperatures, and radiative accelerations of dust grains relevant for extrasolar planets. Our assumptions include spherical grain shape, Deirmendjian particle size distribution, and Mie theory. We consider several species: corundum/alumina, perovskite, olivines with 0 and 50 per cent iron content, pyroxenes with 0, 20, and 60 per cent iron content, pure iron, carbon at two different temperatures, water ice, liquid water, and ammonia. The presented tables cover the wavelength range of 0.2–500 μm and modal particle radii from 0.01 to 100 μm. Equilibrium temperatures and radiative accelerations assume irradiation by a non-blackbody source of light with temperatures from 7000 to 700 K seen at solid angles from 2π to 10−6 sr. The tables are provided to the community together with a simple code which allows for an optional, finite, angular dimension of the source of light (star) in the phase function.
### Hot Jupiter HAT-P-12b may have Clouds
Broad-band spectrophotometry of the hot Jupiter HAT-P-12b from the near-UV to the near-IR
Authors:
Mallonn et al
Abstract:
The detection of trends or gradients in the transmission spectrum of extrasolar planets is possible with observations at very low spectral resolution. Transit measurements of sufficient accuracy using selected broad-band filters allow for an initial characterization of the atmosphere of the planet. We obtained time series photometry of 20 transit events and analyzed them homogeneously, along with eight light curves obtained from the literature. In total, the light curves span a range from 0.35 to 1.25 microns. During two observing seasons over four months each, we monitored the host star to constrain the potential influence of starspots on the derived transit parameters. We rule out the presence of a Rayleigh slope extending over the entire optical wavelength range, a flat spectrum is favored for HAT-P-12b with respect to a cloud-free atmosphere model spectrum. A potential cause of such gray absorption is the presence of a cloud layer at the probed latitudes. Furthermore, in this work we refine the transit parameters, the ephemeris and perform a TTV analysis in which we found no indication for an unseen companion. The host star showed a mild non-periodic variability of up to 1%. However, no stellar rotation period could be detected to high confidence.
### WASP-47: A Compact Multiplanet System With a hot Jupiter and an Ultra-short Period ExoPlanet
A low stellar obliquity for WASP-47, a compact multiplanet system with a hot Jupiter and an ultra-short period planet
Authors:
Sanchis-Ojeda et al
Abstract:
We have detected the Rossiter-McLaughlin effect during a transit of WASP-47b, the only known hot Jupiter with close planetary companions. By combining our spectroscopic observations with Kepler photometry, we show that the projected stellar obliquity is λ = 0° ± 24°. We can firmly exclude a retrograde orbit for WASP-47b, and rule out strongly misaligned prograde orbits. Low obliquities have also been found for most of the other compact multiplanet systems that have been investigated. The Kepler-56 system, with two close-in gas giants transiting their subgiant host star with an obliquity of at least 45°, remains the only clear counterexample.
## Wednesday, October 21, 2015
### White Dwarf Spotted Tearing Apart a Terrestrial Exoplanet, Consuming the Comet-like World
The Death Star of the movie Star Wars may be fictional, but planetary destruction is real. Astronomers announced today that they have spotted a large, rocky object disintegrating in its death spiral around a distant white dwarf star. The discovery also confirms a long-standing theory behind the source of white dwarf "pollution" by metals.
"This is something no human has seen before," says lead author Andrew Vanderburg of the Harvard-Smithsonian Center for Astrophysics (CfA). "We're watching a solar system get destroyed."
The evidence for this unique system came from NASA's Kepler K2 mission, which monitors stars for a dip in brightness that occurs when an orbiting body crosses the star. The data revealed a regular dip every 4.5 hours, which places the object in an orbit about 520,000 miles from the white dwarf (about twice the distance from the Earth to the Moon). It is the first planetary object to be seen transiting a white dwarf.
Vanderburg and his colleagues made additional observations using a number of ground-based facilities: the 1.2-meter and MINERVA telescopes at Whipple Observatory, the MMT, MEarth-South, and Keck.
Combining all the data, they found signs of several additional chunks of material, all in orbits between 4.5 and 5 hours. The main transit was particularly prominent, dimming the star by 40 percent. The transit signal also showed a comet-like pattern. Both features suggest the presence of an extended cloud of dust surrounding the fragment. The total amount of material is estimated to be about the mass of Ceres, a Texas-sized object that is the largest main-belt asteroid in our solar system.
The white dwarf star is located about 570 light-years from Earth in the constellation Virgo. When a Sun-like star reaches the end of its life, it swells into a red giant and sloughs off its outer layers. The hot, Earth-sized core that remains is a white dwarf star, and generally consists of carbon and oxygen with a thin hydrogen or helium shell.
Sometimes, though, astronomers find a white dwarf that shows signs of heavier elements like silicon and iron in its light spectrum. This is a mystery because a white dwarf's strong gravity should quickly submerge these metals.
"It's like panning for gold - the heavy stuff sinks to the bottom. These metals should sink into the white dwarf's interior where we can't see them," explains Harvard co-author John Johnson (CfA).
### Asteroids in the Jumping-Jupiter Migration Model
The evolution of asteroids in the jumping-Jupiter migration model
Authors:
Roig et al
Abstract:
In this work, we investigate the evolution of a primordial belt of asteroids, represented by a large number of massless test particles, under the gravitational effect of migrating Jovian planets in the framework of the jumping-Jupiter model. We perform several simulations considering test particles distributed in the Main Belt, as well as in the Hilda and Trojan groups. The simulations start with Jupiter and Saturn locked in the mutual 3:2 mean motion resonance plus 3 Neptune-mass planets in a compact orbital configuration. Mutual planetary interactions during migration led one of the Neptunes to be ejected in less than 10 Myr of evolution, causing Jupiter to jump by about 0.3 au in semi-major axis. This introduces a large scale instability in the studied populations of small bodies. After the migration phase, the simulations are extended over 4 Gyr, and we compare the final orbital structure of the simulated test particles to the current Main Belt of asteroids with absolute magnitude H less than 9.7. The results indicate that, in order to reproduce the present Main Belt, the primordial belt should have had a distribution peaked at ∼10∘ in inclination and at ∼0.1 in eccentricity. We discuss the implications of this for the Grand Tack model. The results also indicate that neither primordial Hildas, nor Trojans, survive the instability, confirming the idea that such populations must have been implanted from other sources. In particular, we address the possibility of implantation of Hildas and Trojans from the Main Belt population, but find that this contribution should be minor.
### Herbig Ae/Be star HD 100546's Disk is Being fed by its Gas Giant Exoplanets
High-resolution Br-gamma spectro-interferometry of the transitional Herbig Ae/Be star HD 100546: a Keplerian gaseous disc inside the inner rim
Authors:
Mendigutía et al
Abstract:
We present spatially and spectrally resolved Br-gamma emission around the planet-hosting, transitional Herbig Ae/Be star HD 100546. Aiming to gain insight into the physical origin of the line in possible relation to accretion processes, we carried out Br-gamma spectro-interferometry using AMBER/VLTI from three different baselines achieving spatial and spectral resolutions of 2-4 mas and 12000. The Br-gamma visibility is larger than that of the continuum for all baselines. Differential phases reveal a shift between the photocentre of the Br-gamma line -displaced 0.6 mas (0.06 au at 100 pc) NE from the star- and that of the K-band continuum emission -displaced 0.3 mas NE from the star. The photocentres of the redshifted and blueshifted components of the Br-gamma line are located NW and SE from the photocentre of the peak line emission, respectively. Moreover, the photocentre of the fastest velocity bins within the spectral line tends to be closer to that of the peak emission than the photocentre of the slowest velocity bins. Our results are consistent with a Br-gamma emitting region inside the dust inner rim (less than 0.25 au) and extending very close to the central star, with a Keplerian, disc-like structure rotating counter-clockwise, and most probably flared (25 deg). Even though the main contribution to the Br-gamma line does not come from gas magnetically channelled on to the star, accretion on to HD 100546 could be magnetospheric, implying a mass accretion rate of a few 10^-7 Msun/yr. This value indicates that the observed gas has to be replenished on time-scales of a few months to years, perhaps by planet-induced flows from the outer to the inner disc as has been reported for similar systems.
### Did Jupiter Eject a Neptune-Sized Planet from our Solar System?
Could Jupiter or Saturn Have Ejected a Fifth Giant Planet?
Authors:
Cloutier et al
Abstract:
Models of the dynamical evolution of the early solar system following the dispersal of the gaseous protoplanetary disk have been widely successful in reconstructing the current orbital configuration of the giant planets. Statistically, some of the most successful dynamical evolution simulations have initially included a hypothetical fifth giant planet, of ice giant mass, which gets ejected by a gas giant during the early solar system's proposed instability phase. We investigate the likelihood of an ice giant ejection event by either Jupiter or Saturn through constraints imposed by the current orbits of their wide-separation regular satellites Callisto and Iapetus respectively. We show that planetary encounters that are sufficient to eject an ice giant, often provide excessive perturbations to the orbits of Callisto and Iapetus making it difficult to reconcile a planet ejection event with the current orbit of either satellite. Quantitatively, we compute the likelihood of reconciling a regular Jovian satellite orbit with the current orbit of Callisto following an ice giant ejection by Jupiter of ~ 42% and conclude that such a large likelihood supports the hypothesis of a fifth giant planet's existence. A similar calculation for Iapetus reveals that it is much more difficult for Saturn to have ejected an ice giant and reconcile a Kronian satellite orbit with that of Iapetus (likelihood ~ 1%), although uncertainties regarding the formation of Iapetus, on its unusual orbit, complicates the interpretation of this result.
### SEETI: Search for Extinct Extraterrestrial Intelligence
Archaeology has gone interstellar.
The peculiar behavior of KIC 8462852—a star 1,500 light-years from Earth that is prone to irregular dimming—has prompted widespread speculation on the Internet that it is host to an “alien megastructure,” perhaps a vast array of orbiting solar panels.
Scientists have pointed out various natural, non-alien phenomena that could be causing the stellar light show, but the SETI crowd isn’t taking any chances. Astronomers have begun using a radio telescope, the Allen Telescope Array, to detect possible signals in the vicinity of KIC 8462852.
But, the astronomers might be eavesdropping on a tomb.
For years, SETI researchers have argued that we can narrow our search for alien intelligence by looking for telltale signs of large, sophisticated structures built by advanced civilizations. They call this “cosmic archaeology.”
Yet, even if we were to find such artifacts, there’s no guarantee that the civilizations that created them are still around. Floating in space, abandoned for millennia, these objects could be the interstellar equivalent of the statues at Easter Island or the Egyptian Pyramids.
In fact, we might confront the morbid scenario that intelligent life periodically emerges on other worlds, but has an unfortunate tendency to self-destruct.
Sadly, it’s not implausible, given the devastation we’ve wrought during our relatively brief span as the dominant species on this planet.
That’s why a trio of scientists recently published a guide to help astronomers detect alien apocalypses—whether it’s the chemical signature of a world filled with rotting corpses, the radioactive aftermath of nuclear warfare, or the debris left over from a Death Star scenario where an entire planet gets blown to bits.
Call it SEETI, the Search for Extinct Extraterrestrial Intelligence.
## Tuesday, October 20, 2015
### 92% of Exo Earths Have yet to Form
Earth came early to the party in the evolving universe. According to a new theoretical study, when our solar system was born 4.6 billion years ago only eight percent of the potentially habitable planets that will ever form in the universe existed. And, the party won't be over when the sun burns out in another 6 billion years. The bulk of those planets -- 92 percent -- have yet to be born.
This conclusion is based on an assessment of data collected by NASA's Hubble Space Telescope and the prolific planet-hunting Kepler space observatory.
"Our main motivation was understanding the Earth's place in the context of the rest of the universe," said study author Peter Behroozi of the Space Telescope Science Institute (STScI) in Baltimore, Maryland, "Compared to all the planets that will ever form in the universe, the Earth is actually quite early."
Looking far away and far back in time, Hubble has given astronomers a "family album" of galaxy observations that chronicle the universe's star formation history as galaxies grew. The data show that the universe was making stars at a fast rate 10 billion years ago, but the fraction of the universe's hydrogen and helium gas that was involved was very low. Today, star birth is happening at a much slower rate than long ago, but there is so much leftover gas available that the universe will keep cooking up stars and planets for a very long time to come.
"There is enough remaining material [after the big bang] to produce even more planets in the future, in the Milky Way and beyond," added co-investigator Molly Peeples of STScI.
### Tidal Heating Could Dessicate Habitable Zone Exoplanets Around M Dwarfs
Tidal Heating of Earth-like Exoplanets around M Stars: Thermal, Magnetic, and Orbital Evolutions
Authors:
Driscoll et al
Abstract:
The internal thermal and magnetic evolution of rocky exoplanets is critical to their habitability. We focus on the thermal-orbital evolution of Earth-mass planets around low-mass M stars whose radiative habitable zone overlaps with the “tidal zone,” where tidal dissipation is expected to be a significant heat source in the interior. We develop a thermal-orbital evolution model calibrated to Earth that couples tidal dissipation, with a temperature-dependent Maxwell rheology, to orbital circularization and migration. We illustrate thermal-orbital steady states where surface heat flow is balanced by tidal dissipation and cooling can be stalled for billions of years until circularization occurs. Orbital energy dissipated as tidal heat in the interior drives both inward migration and circularization, with a circularization time that is inversely proportional to the dissipation rate. We identify a peak in the internal dissipation rate as the mantle passes through a viscoelastic state at mantle temperatures near 1800 K. Planets orbiting a 0.1 solar-mass star within 0.07 AU circularize before 10 Gyr, independent of initial eccentricity. Once circular, these planets cool monotonically and maintain dynamos similar to that of Earth. Planets forced into eccentric orbits can experience a super-cooling of the core and rapid core solidification, inhibiting dynamo action for planets in the habitable zone. We find that tidal heating is insignificant in the habitable zone around 0.45 (or larger) solar-mass stars because tidal dissipation is a stronger function of orbital distance than stellar mass, and the habitable zone is farther from larger stars. Suppression of the planetary magnetic field exposes the atmosphere to stellar wind erosion and the surface to harmful radiation. In addition to weak magnetic fields, massive melt eruption rates and prolonged magma oceans may render eccentric planets in the habitable zone of low-mass stars inhospitable for life.
### Giant Impacts are an Efficient Mechanism for Devolatilization of Super-Earths
Giant Impact: An Efficient Mechanism for Devolatilization of Super-Earths
Authors:
Liu et al
Abstract:
Mini-Neptunes and volatile-poor super-Earths coexist on adjacent orbits in proximity to host stars such as Kepler-36 and Kepler-11. Several post-formation processes have been proposed for explaining the origin of the compositional diversity: the mass loss via stellar XUV irradiation, degassing of accreted material, and in-situ accumulation of the disk gas. Close-in planets are also likely to experience giant impacts during the advanced stage of planet formation. This study examines the possibility of transforming volatile-rich super-Earths / mini-Neptunes into volatile-depleted super-Earths through giant impacts. We present the results of three-dimensional giant impact simulations in the accretionary and disruptive regimes. Target planets are modeled with a three-layered structure composed of an iron core, silicate mantle and hydrogen/helium envelope. In the disruptive case, the giant impact can remove most of the H/He atmosphere immediately and homogenize the refractory material in the planetary interior. In the accretionary case, the planet can retain more than half of the gaseous envelope, while a compositional gradient suppresses efficient heat transfer as its interior undergoes double-diffusive convection. After the giant impact, a hot and inflated planet cools and contracts slowly. The extended atmosphere enhances the mass loss via both a Parker wind induced by thermal pressure and hydrodynamic escape driven by the stellar XUV irradiation. As a result, the entire gaseous envelope is expected to be lost due to the combination of those processes in both cases. We propose that Kepler-36b may have been significantly devolatilized by giant impacts, while a substantial fraction of Kepler-36c's atmosphere may remain intact. Furthermore, the stochastic nature of giant impacts may account for the large dispersion in the mass--radius relationship of close-in super-Earths and mini-Neptunes.
### Alpha Centauri Bb is a False Positive
Ghost in the time series: no planet for Alpha Cen B
Authors:
Rajpaul et al
Abstract:
We re-analyse the publicly available radial velocity (RV) measurements for Alpha Cen B, a star hosting an Earth-mass planet candidate, Alpha Cen Bb, with 3.24 day orbital period. We demonstrate that the 3.24 d signal observed in the Alpha Cen B data almost certainly arises from the window function (time sampling) of the original data. We show that when stellar activity signals are removed from the RV variations, other significant peaks in the power spectrum of the window function are coincidentally suppressed, leaving behind a spurious yet apparently-significant 'ghost' of a signal that was present in the window function's power spectrum to begin with. Even when fitting synthetic data with time sampling identical to the original data, but devoid of any genuine periodicities close to that of the planet candidate, the original model used to infer the presence of Alpha Cen Bb leads to identical conclusions: viz., the 3σ detection of a half-a-metre-per-second signal with 3.236 day period. Our analysis underscores the difficulty of detecting weak planetary signals in RV data, and the importance of understanding in detail how every component of an RV data set, including its time sampling, influences final statistical inference.
## Monday, October 19, 2015
### GJ1214b's Atmospheric Mixing Driven by Anti-Hadley Circulation
3D modeling of GJ1214b's atmosphere: vertical mixing driven by an anti-Hadley circulation
Authors:
Charnay et al
Abstract:
GJ1214b is a warm sub-Neptune transiting in front of a nearby M dwarf star. Recent observations indicate the presence of high and thick clouds or haze whose presence requires strong atmospheric mixing. In order to understand the transport and distribution of such clouds/haze, we study the atmospheric circulation and the vertical mixing of GJ1214b with a 3D General Circulation Model for cloud-free hydrogen-dominated atmospheres (metallicity of 1, 10 and 100 times the solar value) and for a water-dominated atmosphere. We analyze the effect of the atmospheric metallicity on the thermal structure and zonal winds. We also analyze the zonal mean meridional circulation and show that it corresponds to an anti-Hadley circulation in most of the atmosphere, with upwelling at mid-latitude and downwelling at the equator on average. This circulation must be present on a large range of synchronously rotating exoplanets with strong impact on cloud formation and distribution. Using simple tracers, we show that vertical winds on GJ1214b can be strong enough to loft micrometric particles and that the anti-Hadley circulation leads to a minimum of tracers at the equator. We find that the strength of the vertical mixing increases with metallicity. We derive 1D equivalent eddy diffusion coefficients and find simple parametrizations ranging from Kzz = 7x10^2 x P_bar^(-0.4) m^2/s for solar metallicity to Kzz = 3x10^3 x P_bar^(-0.4) m^2/s for 100x solar metallicity. These values should favor an efficient formation of photochemical haze in the upper atmosphere of GJ1214b.
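For readers who want a feel for the quoted eddy-diffusion parametrizations, here is a minimal Python sketch (written for this post, not code from the paper; the 1 mbar pressure is an assumed example value):

```python
def kzz(p_bar: float, prefactor: float) -> float:
    """Eddy diffusion coefficient Kzz = prefactor * P_bar**(-0.4), in m^2/s."""
    return prefactor * p_bar ** -0.4

p = 1e-3  # assumed example pressure of 1 mbar, expressed in bar
for label, prefactor in [("1x solar", 7e2), ("100x solar", 3e3)]:
    print(f"{label}: Kzz ~ {kzz(p, prefactor):.1e} m^2/s at 1 mbar")
```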
### Gravitational Microlensing Events as a Target for SETI project
Gravitational Microlensing Events as a Target for SETI project
Author:
Rahvar
Abstract:
Detection of signals from a possible extrasolar technological civilization is one of the challenging efforts of science. In this work, we propose using natural telescopes made of single or binary gravitational lensing systems to magnify leakage of electromagnetic signals from a remote planet that harbours an Extra Terrestrial Intelligent (ETI) technology. The gravitational microlensing surveys are monitoring a large area of the Galactic bulge to search for microlensing events, and each year they find more than 2000 events. These lenses are capable of playing the role of natural telescopes, and on some occasions they can magnify signals from planets orbiting around the source stars in the gravitational microlensing systems. Assuming that the frequency of electromagnetic waves used for telecommunication in ETIs is similar to ours, we propose follow-up observation of microlensing events with radio telescopes such as the Square Kilometre Array (SKA), Low Frequency Demonstrators (LFD) and Mileura Wide-Field Array (MWA). Amplifying signals from the leakage of broadcasting of Earth-like civilizations will allow us to detect them up to the center of the Milky Way galaxy. Our analysis shows that in binary microlensing systems, the probability of amplification of signals from ETIs is higher than in single microlensing events. Finally we propose a practical observational strategy with the follow-up observation of binary microlensing events with the SKA as a new program for searching for ETIs. The probability of detection for optimistic values of the factors of the Drake equation is around one event per year.
### Kardashev Type III Civilizations are NOT Present in the Local Universe
Application of the mid-IR radio correlation to the Ĝ sample and the search for advanced extraterrestrial civilisations
Author:
Garrett
Abstract:
Wright et al. (2014, ApJ, 792, 26) have embarked on a search for advanced Kardashev Type III civilisations via the compilation of a sample of sources with extreme mid-IR emission and colours. The aim is to furnish a list of candidate galaxies that might harbour an advanced Kardashev Type III civilisation; in this scenario, the mid-IR emission is then primarily associated with waste heat energy by-products. I apply the mid-IR radio correlation to this Glimpsing Heat from Alien Technology (Ĝ) sample, a catalogue of 93 candidate galaxies compiled by Griffith et al. (2015, ApJS, 217, 25). I demonstrate that the mid-IR and radio luminosities are correlated for the sample, determining a k-corrected value of q22 = 1.35 ± 0.42. By comparison, a similar measurement for 124 galaxies drawn from the First Look Survey (FLS) has q22 = 0.87 ± 0.27. The statistically significant difference of the mean value of q22 for these two samples, taken together with their more comparable far-IR properties, suggests that the Ĝ sample shows excessive emission in the mid-IR. The fact that the Ĝ sample largely follows the mid-IR radio correlation strongly suggests that the vast majority of these sources are associated with galaxies in which natural astrophysical processes are dominant. This simple application of the mid-IR radio correlation can substantially reduce the number of false positives in the Ĝ catalogue, since galaxies occupied by advanced Kardashev Type III civilisations would be expected to exhibit very high values of q. I identify nine outliers in the sample with q22 > 2, of which at least three have properties that are relatively well explained via standard astrophysical interpretations, e.g. dust emission associated with nascent star formation and/or nuclear activity from a heavily obscured AGN. The other outliers have not been studied in any great detail, and are deserving of further observation. I also note that the comparison of resolved mid-IR and radio images of galaxies on sub-galactic (kpc) scales can also be useful in identifying and recognising artificial mid-IR emission from less advanced intermediate Type II/III civilisations. Nevertheless, from the bulk properties of the Ĝ sample, I conclude that Kardashev Type III civilisations are either very rare or do not exist in the local Universe.
## Sunday, October 18, 2015
### Exposure-based Algorithm for Removing Systematics out of the CoRoT Light Curves
Exposure-based Algorithm for Removing Systematics out of the CoRoT Light Curves
Authors:
Guterman et al
Abstract:
The CoRoT space mission was operating for almost 6 years, producing thousands of continuous photometric light curves. The temporal series of exposures are processed by the production pipeline, correcting the data for known instrumental effects. But even after these model-based corrections, some collective trends are still visible in the light curves. We propose here a simple exposure-based algorithm to remove instrumental effects. The effect of each exposure is a function of only two instrumental stellar parameters, position on the CCD and photometric aperture. The effect is not a function of the stellar flux, and therefore much more robust. As an example, we show that the ∼2% long-term variation of the early run LRc01 is nicely detrended on average. This systematics removal process is part of the CoRoT legacy data pipeline.
### No Sign O & B Class Stars Destroy Their Protoplanetary Disks
NO EVIDENCE FOR PROTOPLANETARY DISK DESTRUCTION BY OB STARS IN THE MYStIX SAMPLE
Authors:
Richert et al
Abstract:
Hubble Space Telescope images of proplyds in the Orion Nebula, as well as submillimeter/radio measurements, show that the dominant O7 star θ¹ Ori C photoevaporates nearby disks around pre-main-sequence stars. Theory predicts that massive stars photoevaporate disks within distances of the order of 0.1 pc. These findings suggest that young, OB-dominated massive H II regions are inhospitable to the survival of protoplanetary disks and, subsequently, to the formation and evolution of planets. In the current work, we test this hypothesis using large samples of pre-main-sequence stars in 20 massive star-forming regions selected with X-ray and infrared photometry in the MYStIX survey. Complete disk destruction would lead to a deficit of cluster members with an excess in JHKs and Spitzer/IRAC bands in the vicinity of O stars. In four MYStIX regions containing O stars and a sufficient surface density of disk-bearing sources to reliably test for spatial avoidance, we find no evidence for the depletion of inner disks around pre-main-sequence stars in the vicinity of O-type stars, even very luminous O2-O5 stars. These results suggest that massive star-forming regions are not very hostile to the survival of protoplanetary disks and, presumably, to the formation of planets.
### Particle trapping in Transition Disks
Testing particle trapping in transition disks with ALMA
Authors:
Pinilla et al
Abstract:
We present new ALMA continuum observations at 336 GHz of two transition disks, SR 21 and HD 135344B. In combination with previous ALMA observations from Cycle 0 at 689 GHz, we compare the visibility profiles at the two frequencies and calculate the spectral index (αmm). The observations of SR 21 show a clear shift in the visibility nulls, indicating radial variations of the inner edge of the cavity at the two wavelengths. Notable radial variations of the spectral index are also detected for SR 21 with values of αmm∼3.8−4.2 in the inner region (r≲35 AU) and αmm∼2.6−3.0 outside. An axisymmetric ring ("ring model") or a ring with the addition of an azimuthal Gaussian profile, for mimicking a vortex structure ("vortex model"), is assumed for fitting the disk morphology. For SR 21, the ring model better fits the emission at 336 GHz, conversely the vortex model better fits the 689 GHz emission. For HD 135344B, neither a significant shift in the null of the visibilities nor radial variations of αmm are detected. Furthermore, for HD 135344B, the vortex model fits both frequencies better than the ring model. However, the azimuthal extent of the vortex increases with wavelength, contrary to model predictions for particle trapping by anticyclonic vortices. For both disks, the azimuthal variations of αmm remain uncertain to confirm azimuthal trapping. The comparison of the current data with a generic model of dust evolution that includes planet-disk interaction suggests that particles in the outer disk of SR 21 have grown to millimetre sizes and have accumulated in a radial pressure bump, whereas with the current resolution there is not clear evidence of radial trapping in HD 135344B, although it cannot be excluded either.
## Saturday, October 17, 2015
### Does Planetary Formation Help Spin Down Their Protostars?
Protostellar spin-down: a planetary lift?
Authors:
Bouvier et al
Abstract:
When they first appear in the HR diagram, young stars rotate at a mere 10 per cent of their break-up velocity. They must have lost most of the angular momentum initially contained in the parental cloud, the so-called angular momentum problem. We investigate here a new mechanism by which large amounts of angular momentum might be shed from young stellar systems, thus yielding slowly rotating young stars. Assuming that planets promptly form in circumstellar discs and rapidly migrate close to the central star, we investigate how the tidal and magnetic interactions between the protostar, its close-in planet(s), and the inner circumstellar disc can efficiently remove angular momentum from the central object. We find that neither the tidal torque nor the variety of magnetic torques acting between the star and the embedded planet are able to counteract the spin-up torques due to accretion and contraction. Indeed, the former are orders of magnitude weaker than the latter beyond the corotation radius and are thus unable to prevent the young star from spinning up. We conclude that star–planet interaction in the early phases of stellar evolution does not appear as a viable alternative to magnetic star–disc coupling to understand the origin of the low angular momentum content of young stars.
### Cool and Luminous Transients from Mass-Losing Binary Stars
Cool and Luminous Transients from Mass-Losing Binary Stars
Authors:
Pejcha et al
Abstract:
We study transients produced by equatorial disk-like outflows from catastrophically mass-losing binary stars with an asymptotic velocity and energy deposition rate near the inner edge which are proportional to the binary escape velocity v_esc. As a test case, we present the first smoothed-particle radiation-hydrodynamics calculations of the mass loss from the outer Lagrange point with realistic equation of state and opacities. The resulting spiral stream becomes unbound for binary mass ratios 0.06 < q < 0.8. For synchronous binaries with non-degenerate components, the spiral-stream arms merge at a radius of ~10a, where a is the binary semi-major axis, and the accompanying shock thermalizes 10-20% of the kinetic power of the outflow. The mass-losing binary outflows produce luminosities proportional to the mass loss rate and v_esc, reaching up to ~10^6 L_Sun. The effective temperatures depend primarily on v_esc and span 500 K < T_eff < 6000 K. Dust readily forms in the outflow, potentially in a catastrophic global cooling transition. The appearance of the transient is viewing angle-dependent due to vastly different optical depths parallel and perpendicular to the binary plane. The predicted peak luminosities, timescales, and effective temperatures of mass-losing binaries are compatible with those of many of the class of recently-discovered red transients such as V838 Mon and V1309 Sco. We predict a correlation between the peak luminosity and the outflow velocity, which is roughly obeyed by the known red transients. Outflows from mass-losing binaries can produce luminous (10^5 L_Sun) and cool (T_eff < 1500 K) transients lasting a year or longer, as has potentially been detected by Spitzer surveys of nearby galaxies.
### Impact of the Initial Disk Mass Function on the Disk Fraction
Impact of the initial disk mass function on the disk fraction
Authors:
Oshawa et al
Abstract:
The disk fraction, the percentage of stars with disks in a young cluster, is widely used to investigate the lifetime of the protoplanetary disk, which can impose an important constraint on the planet formation mechanism. The relationship between the decay timescale of the disk fraction and the mass dissipation timescale of an individual disk, however, remains unclear. Here we investigate the effect of the disk mass function (DMF) on the evolution of the disk fraction. We show that the time variation in the disk fraction depends on the spread of the DMF and the detection threshold of the disk. In general, the disk fraction decreases more slowly than the disk mass if a typical initial DMF and a detection threshold are assumed. We find that, if the disk mass decreases exponentially, the mass dissipation timescale of the disk can be as short as 1 Myr even when the disk fraction decreases with a time constant of ∼2.5 Myr. The decay timescale of the disk fraction can be a useful parameter to investigate the disk lifetime, but the difference between the mass dissipation of an individual disk and the decrease in the disk fraction should be properly appreciated to estimate the timescale of the disk mass dissipation.
## Friday, October 16, 2015
### Discovery of Brown Dwarfs in Rho Ophiuchi's Dark Cloud L 1688
Discovery of Young Methane Dwarfs in the Rho Ophiuchi L 1688 Dark Cloud
Authors:
Chiang et al
Abstract:
We report the discovery of two methane dwarfs in the dark cloud L 1688 of the Rho Oph star-forming region. The two objects were among the T dwarf candidates with possible methane absorption and cool atmospheres, as diagnosed by infrared colors using deep WIRCam/CFHT HK plus CH4ON images, and IRAC/Spitzer c2d data. Follow-up spectroscopic observations with the FLAMINGOS-2/Gemini South confirmed the methane absorption at 1.6 micron. Compared with spectral templates of known T dwarfs in the field, i.e., of the old populations, Oph J162738-245240 (Oph-T3) is a T0/T1 type, whereas Oph J162645-241949 (Oph-T17) is consistent with a T3/T4 type in the H band but an L8/T1 in the K band. Compared with the BT-Settl model, both Oph-T3 and Oph-T17 are consistent with being cool, ~ 1000 K and ~ 900 K, respectively, and of low surface gravity, log(g) = 3.5. With an age no more than a couple Myr, these two methane dwarfs thereby represent the youngest T dwarfs ever confirmed. A young late L dwarf, OphJ162651-242110, was found serendipitously in our spectroscopic observations.
### Nearby Brown Dwarf WISEP J180026.60+013453.1's Properties
Properties of the Nearby Brown Dwarf WISEP J180026.60+013453.1
Authors:
Gizis et al
Abstract:
We present new spectroscopy and astrometry to characterize the nearby brown dwarf WISEP J180026.60+013453.1. The optical spectral type, L7.5, is in agreement with the previously reported near-infrared spectral type. The preliminary trigonometric parallax places it at a distance of 8.01±0.21 pc, confirming that it is the fourth closest known late-L (L7-L9) dwarf. The measured luminosity, our detection of lithium, and the lack of low surface gravity indicators indicate that WISEP J180026.60+013453.1 has a mass 0.03 < M < 0.06 M⊙ and an age between 300 million and 1.5 billion years according to theoretical substellar evolution models. The low space motion is consistent with this young age. We have measured the rotational broadening (v sin i = 13.5±0.5 km/s), and use it to estimate a maximum rotation period of 9.3 hr.
### KIC 8462852 is an Excellent SETI Target
The Ĝ Search for Extraterrestrial Civilizations with Large Energy Supplies. IV. The Signatures and Information Content of Transiting Megastructures
Authors:
Wright et al
Abstract:
Arnold (2005), Forgan (2013), and Korpela et al. (2015) noted that planet-sized artificial structures could be discovered with Kepler as they transit their host star. We present a general discussion of transiting megastructures, and enumerate ten potential ways their anomalous silhouettes, orbits, and transmission properties would distinguish them from exoplanets. We also enumerate the natural sources of such signatures.
Several anomalous objects, such as KIC 12557548 and CoRoT-29, have variability in depth consistent with Arnold's prediction and/or an asymmetric shape consistent with Forgan's model. Since well motivated physical models have so far provided natural explanations for these signals, the ETI hypothesis is not warranted for these objects, but they still serve as useful examples of how nonstandard transit signatures might be identified and interpreted in a SETI context. Boyajian et al. 2015 recently announced KIC 8462852, an object with a bizarre light curve consistent with a "swarm" of megastructures. We suggest this is an outstanding SETI target.
We develop the normalized information content statistic M to quantify the information content in a signal embedded in a discrete series of bounded measurements, such as variable transit depths, and show that it can be used to distinguish among constant sources, interstellar beacons, and naturally stochastic or artificial, information-rich signals. We apply this formalism to KIC 12557548 and a specific form of beacon suggested by Arnold to illustrate its utility.
## Thursday, October 15, 2015
### Some Lambda Boo Stars are Eating Their hot Jupiters
Lambda Boo Abundance Patterns: Accretion from Orbiting Sources
Authors:
Jura et al
Abstract:
The abundance anomalies in lambda Boo stars are popularly explained by element-specific mass inflows at rates that are much greater than empirically-inferred bounds for interstellar accretion. Therefore, a lambda Boo star's thin outer envelope must derive from a companion star, planet, analogs to Kuiper Belt Objects or a circumstellar disk. Because radiation pressure on gas-phase ions might selectively allow the accretion of carbon, nitrogen, and oxygen and inhibit the inflow of elements such as iron, the source of the acquired matter need not contain dust. We propose that at least some lambda Boo stars accrete from the winds of hot Jupiters.
### Effects of Refraction on Gas Giant Transmission Spectra
Effects of refraction on transmission spectra of gas giants: decrease of the Rayleigh scattering slope and breaking of retrieval degeneracies
Author:
Bétrémieux
Abstract:
Detection of the signature of Rayleigh scattering in the transmission spectrum of an exoplanet is increasingly becoming the target of observational campaigns because the spectral slope of the Rayleigh continuum enables one to determine the scaleheight of its atmosphere in the absence of hazes. However, this is only true when one ignores the refractive effects of the exoplanet's atmosphere. I illustrate with a suite of simple isothermal clear Jovian H2-He atmosphere models with various abundances of water that refraction can decrease significantly the spectral slope of the Rayleigh continuum and that it becomes flat in the infrared. This mimics a surface, or an optically thick cloud deck, at much smaller pressures than one can probe in the non-refractive case. Although the relative impact of refraction on an exoplanet's transmission spectrum increases with decreasing atmospheric temperatures as well as increasing stellar temperature, it is still quite important from a retrieval's perspective even for a Jovian-like planet with an atmospheric temperature as high as 1200 K. Indeed, the flat Rayleigh continuum in the infrared breaks in large part the retrieval degeneracy between abundances of chemical species and the planet's radius because the size of spectral features increases significantly with abundances, in stark contrast with the non-refractive case which simply shifts them to a larger or smaller effective radius. Abundances inferred assuming the atmosphere is cloud-free are lower limits. These results show how important it is to include refraction in retrieval algorithms to interpret transmission spectra of gas giants accurately.
### PTFO 8-8695b may be a False Positive
Tests of the planetary hypothesis for PTFO 8-8695b
Authors:
Yu et al
Abstract:
The T Tauri star PTFO 8-8695 exhibits periodic fading events that have been interpreted as the transits of a giant planet on a precessing orbit. Here we present three tests of the planet hypothesis. First, we sought evidence for the secular changes in light-curve morphology that are predicted to be a consequence of orbital precession. We observed 28 fading events spread over several years, and did not see the expected changes. Instead we found that the fading events are not strictly periodic. Second, we attempted to detect the planet's radiation, based on infrared observations spanning the predicted times of occultations. We ruled out a signal of the expected amplitude. Third, we attempted to detect the Rossiter-McLaughlin effect by performing high-resolution spectroscopy throughout a fading event. No effect was seen at the expected level, ruling out most (but not all) possible orientations for the hypothetical planetary orbit. Our spectroscopy also revealed strong, time-variable, high-velocity Hα and Ca H & K emission features. All these observations cast doubt on the planetary hypothesis, and suggest instead that the fading events represent starspots, eclipses by circumstellar dust, or occultations of an accretion hotspot.
## Wednesday, October 14, 2015
### Long-term Evolution of Photoevaporating Transition Disks With Gas Giant
The long-term evolution of photoevaporating transition discs with giant planets
Authors:
Rosotti et al
Abstract:
Photo-evaporation and planet formation have both been proposed as mechanisms responsible for the creation of a transition disc. We have studied their combined effect through a suite of 2d simulations of protoplanetary discs undergoing X-ray photoevaporation with an embedded giant planet. In a previous work we explored how the formation of a giant planet triggers the dispersal of the inner disc by photo-evaporation at earlier times than what would have happened otherwise. This is particularly relevant for the observed transition discs with large holes and high mass accretion rates that cannot be explained by photo-evaporation alone. In this work we significantly expand the parameter space investigated by previous simulations. In addition, the updated model includes thermal sweeping, needed for studying the complete dispersal of the disc. After the removal of the inner disc the disc is a non accreting transition disc, an object that is rarely seen in observations. We assess the relative length of this phase, to understand if it is long lived enough to be found observationally. Depending on the parameters, especially on the X-ray luminosity of the star, we find that the fraction of time spent as a non-accretor greatly varies. We build a population synthesis model to compare with observations and find that in general thermal sweeping is not effective enough to destroy the outer disc, leaving many transition discs in a relatively long lived phase with a gas free hole, at odds with observations. We discuss the implications for transition disc evolution. In particular, we highlight the current lack of explanation for the missing non-accreting transition discs with large holes, which is a serious issue in the planet hypothesis.
### K2-22b: a hot, Disintegrating Terrestrial World With a Cometary Head and Leading Tail
THE K2-ESPRINT PROJECT. I. DISCOVERY OF THE DISINTEGRATING ROCKY PLANET K2-22b WITH A COMETARY HEAD AND LEADING TAIL
Authors:
Sanchis-Ojeda et al
Abstract:
We present the discovery of a transiting exoplanet candidate in the K2 Field-1 with an orbital period of 9.1457 hr: K2-22b. The highly variable transit depths, ranging from ~0% to 1.3%, are suggestive of a planet that is disintegrating via the emission of dusty effluents. We characterize the host star as an M-dwarf with Teff ≃ 3800 K. We have obtained ground-based transit measurements with several 1-m class telescopes and with the GTC. These observations (1) improve the transit ephemeris; (2) confirm the variable nature of the transit depths; (3) indicate variations in the transit shapes; and (4) demonstrate clearly that at least on one occasion the transit depths were significantly wavelength dependent. The latter three effects tend to indicate extinction of starlight by dust rather than by any combination of solid bodies. The K2 observations yield a folded light curve with lower time resolution but with substantially better statistical precision compared with the ground-based observations. We detect a significant "bump" just after the transit egress, and a less significant bump just prior to transit ingress. We interpret these bumps in the context of a planet that is not only likely streaming a dust tail behind it, but also has a more prominent leading dust trail that precedes it. This effect is modeled in terms of dust grains that can escape to beyond the planet's Hill sphere and effectively undergo "Roche lobe overflow," even though the planet's surface is likely underfilling its Roche lobe by a factor of 2.
# Square Area Formula
The area of a square is the space occupied by the square.
For example, a square that covers 25 unit squares occupies 25 square units of space.
Therefore, the area of that square is 25 square units.
## What is the Square Area Formula?
The area of a square is the product of the lengths of two of its sides, i.e. the square of its side length.
The formula for the area of a square with side 's' is:
Area of a Square = side × side = side²
When the diagonal d of the square is known, the formula for finding the area of the square is:
Area of a Square = d²/2 (this follows because the diagonal of a square with side s is d = s√2, so s² = d²/2)
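As a quick sanity check of these two formulas, here is a minimal Python sketch (not part of the original page; the numbers are just illustrative) that computes the area from a side and from a diagonal:

```python
import math

def area_from_side(side: float) -> float:
    return side ** 2            # Area = side^2

def area_from_diagonal(d: float) -> float:
    return d ** 2 / 2           # Area = d^2 / 2, since d = side * sqrt(2)

side = 5.0
print(area_from_side(side))                     # 25.0
print(area_from_diagonal(side * math.sqrt(2)))  # ~25.0 (same square, via its diagonal)
```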
## Solved Examples Using Square Area Formula
### What is the area of a square swimming pool of sides 8 ft each?
Solution:
We know that the area of a square = side²
Therefore, the area of the swimming pool is 8 × 8 = 64 sq. ft
Answer: The area of the swimming pool is 64 sq. ft
### The area of a square carrom board is 3600 sq. in. What is the length of its sides?
Solution:
Area of a square = side²
Area of the square carrom board = 3600 sq. in.
We know that 60 × 60 = 3600, so the side length is √3600 = 60 in.
Answer: The length of each side of the carrom board is 60 in.
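If you want to verify such answers programmatically, the following small Python sketch (an illustration, not part of the original solution) recovers the side length from the area:

```python
import math

def side_from_area(area: float) -> float:
    return math.sqrt(area)       # invert Area = side^2

print(side_from_area(3600))      # 60.0 -> each side of the carrom board is 60 in
print(side_from_area(64))        # 8.0  -> consistent with the swimming pool example
```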
Question
# Which of these diagrams correctly shows how a ray of light gets reflected?
A
B
C
D
Solution
## The correct option is B
According to the law of reflection, the angle of incidence (i) equals the angle of reflection (r), i.e. ∠IOP = ∠POR. Also, OP is the normal to the surface, so it makes 90° with the surface. Hence, only option D satisfies these conditions.
# Classification of Colloids
## Colloids
A colloid is a mixture in which microscopically dispersed particles of one substance are suspended in another. The particle size ranges from about 1 to 1000 nanometres, which is usually larger than the particles found in a true solution. A mixture is classified as a colloid only when its particles do not settle down after being left undisturbed. Colloidal solutions exhibit the Tyndall effect, in which a beam of light passing through the colloid is scattered by the interaction between the light and the colloidal particles.
The IUPAC definition of colloid is as follows:
“The colloidal state is the state of subdivision in which molecules or polymolecular particles having at least one dimension in the range of 1 nanometre and 1 micrometre, are dispersed in some medium”.
## Classification of Colloid
A colloid is a mixture in which fine particles of one substance are dispersed throughout another. The substance that is dispersed is called the dispersed phase, and the medium in which it is dispersed is called the dispersion medium.
Based on the types of colloids, their classification is done. These are classified as follows:
1. Multimolecular colloids
2. Macromolecular colloids
3. Associated colloids
### Multimolecular Colloids
On mixing, a large number of smaller molecules of a substance aggregate together to form species whose size lies in the colloidal range.
Example: A sulphur sol consists of particles containing thousands of S8 sulphur molecules.
### Macromolecular Colloids
In this colloid, the macromolecule forms a solution with a solvent. The size of particles remains in the range of colloidal particle size. Here, the colloidal particles are macromolecules having a very large molecular mass.
Example: Starch, proteins, cellulose, enzymes, and polystyrene.
### Associated Colloids
Some substances behave as normal strong electrolytes at low concentration but act as colloidal sols at high concentration. At high concentration the particles aggregate and show colloidal behaviour; these aggregates are known as micelles, and such systems are also called associated colloids. Micelle formation occurs only above a certain temperature and above a specific concentration. These colloids can be reverted to a normal solution by dilution.
Example: Soap, synthetic detergents.
Below is the tabular explanation of different types of colloidal solutions with examples to get a better understanding.
| Name of Colloid | Dispersed Phase | Dispersion Medium | Example |
| --- | --- | --- | --- |
| Sol | Solid | Liquid | Paints, soap solution |
| Solid sol | Solid | Solid | Gemstone |
| Aerosol (solid) | Solid | Gas | Smoke |
| Aerosol (liquid) | Liquid | Gas | Fog, mist, cloud |
| Emulsion | Liquid | Liquid | Milk, butter |
| Foam | Gas | Liquid | Shaving cream |
| Solid foam | Gas | Solid | Foam rubber, sponge |
| Gel | Liquid | Solid | Gelatin |
The table above summarises the different types of colloidal solutions, classified by their dispersed phase and dispersion medium.
Colloids can also be classified based on the nature of the interaction between the dispersed phase and medium:
1. Hydrophilic Colloid: These are water-loving or are attracted to the water. They are also known as reversible sols.
Example: Agar, gelatin, and pectin
1. Hydrophobic Colloid: These are the opposite in nature and are repelled by water. These are also called irreversible sols.
Example: Gold sols and clay particles.
### Methods of Preparation
Colloids are formed in two principal ways, namely:
1. Dispersion - large particles or droplets are broken down to colloidal dimensions, for example by applying shear (e.g., shaking or mixing).
2. Condensation - small dissolved molecules are built up to colloidal size by precipitation or condensation.
Methods by which lyophobic colloids can be prepared:
1. Dispersion Method
2. Aggregation Method
Colloidal solutions can be purified by the following methods:
1. Dialysis
2. Electrodialysis
3. Ultrafiltration
4. Electro Decantation
### Stabilisation of Colloids
The colloid solution is said to be stable when the suspended particles in the mixture do not settle down. Stability is hindered by aggregation and sedimentation phenomena.
There are two traditional methods for colloidal stability.
1. Electrostatic Stabilisation
2. Steric Stabilisation
However, stability improvement has rarely been considered.
### Application of Colloids
Colloids are used widely and they have varied applications. Some of its applications are
1. Medicine: Medicines in the colloidal form are absorbed by the body tissues and therefore are used widely and effectively.
2. The cleansing action of soap is explained by the fact that a soap solution is colloidal: it removes dirt by emulsifying the greasy matter.
3. Purification of Water: The precipitation of colloidal impurities can be done by adding certain electrolytes like alum. The negatively charged colloidal particles of impurities get neutralized by the effect of alum.
4. A colloid is used as a thickening agent in industrial products such as lubricants and lotion.
5. Colloids are useful in the manufacture of paints and ink. In ballpoint pens, the ink used is a liquid-solid colloid.
6. Colloidal gold is injected into the human body, and silver sol is used as an eye lotion. Dextran and hetastarch are other colloids used as medicines.
### Colloid Elimination
To remove colloids from water, the best solution is to perform a first coagulation step with a coagulant agent. The purpose of destabilizing the electrostatic charge of the colloids in this step is to promote their collisions and subsequent agglomeration during the flocculation step.
### What are the Methods to Purify Colloids?
Peptization: the conversion of a freshly prepared precipitate into a colloidal dispersion by adding a small amount of an electrolyte.
Coagulation: the process of thickening (clotting) of colloidal particles so that they settle out of the dispersion.
Bredig's Arc Method: a method used for the preparation of colloidal sols, typically of metals.
Dialysis: the separation of crystalloids (true-solution particles) from a colloidal solution by diffusion through a membrane such as parchment paper or cellulose.
### Examples of Colloids Chemistry
The properties of colloidal dispersions are closely linked to the high surface area of the dispersed phase and to the chemistry of these interfaces. This natural blend of colloid and surface chemistry represents a crucial research area, and on the basis of the dispersed phase and dispersion medium the common examples fall into the following categories:
1. Smog, fog, and sprays: these are liquid aerosols, with a liquid dispersed phase and a gaseous dispersion medium.
2. Dust and smoke in air: these are solid aerosols, with a solid dispersed phase and a gaseous dispersion medium.
3. Mayonnaise and milk: these are emulsions, in which both the dispersed phase and the dispersion medium are liquids.
4. Pigmented plastics: such a blend is termed a suspension.
5. Au sol, silver iodide sol, and toothpaste: these are colloidal solutions (sols), with a solid dispersed phase and a liquid dispersion medium.
Particulate matter has a high surface-area-to-mass ratio, and this large interfacial area is one of the leading factors behind the surface properties of colloidal systems.
For example, molecules of organic pollutants and dyes can be removed effectively from water by adsorption onto particulate activated charcoal, because of the charcoal's high surface area. These properties and processes are widely used in oral treatments and in water purification.
Molecules in the bulk of a liquid have more nearest neighbours than those at the surface and interact with them through attractive forces. Surface molecules are partially freed from bonding with neighbouring molecules and therefore have higher energy than those in the bulk. To create new surface, work must be done to bring fully interacting molecules from the bulk of the liquid to the surface. This gives rise to the surface tension, or surface energy, of a liquid: the stronger the forces between liquid molecules, the greater the work required.
### What are the Characteristics of Colloid Solutions?
Most colloidal solutions share the following characteristics:
• The particles are kept in motion by thermal kinetic energy, which aids their mobility (Brownian motion).
• The inertia of the particles is negligible compared with the influence of the surrounding fluid.
• Gravitation has little or no effect on the particles, so they do not settle.
• The particles interact strongly with electromagnetic radiation, scattering light (the Tyndall effect).
Familiar examples of colloids at home include the shampoo we use for hair, milk, liquid metal polish, and the liquid hand wash commonly used at home.
## FAQs on Classification of Colloids
1. What are associated colloids?
Associated colloids are microheterogeneous systems in which micelles are formed by a substance dissolved in the dispersion medium. At low concentration such a substance generally behaves as a normal strong electrolyte, but at higher concentration it exhibits colloidal properties due to the formation of aggregated particles. Two terms are used with associated colloids:
1. Kraft Temperature: The formation of micelles occurs above a certain temperature known as Kraft temperature.
2. Specific Concentration: The formation of micelles also occurs above a specific concentration known as critical micelle concentration.
For example synthetic detergents, soap, organic dyes, tanning agents, and alkaloids.
2. How are multimolecular and macromolecular colloids different from each other? Provide examples of each.
Multimolecular colloid particles are aggregates of a large number of atoms or molecules, each with a diameter of less than 1 nm, held together by weak van der Waals forces. Macromolecular colloids, by contrast, consist of single macromolecules of very large molecular mass, with strong chemical bonds within each macromolecular particle; because these molecules are flexible, they can take on various shapes. Associated colloids differ from both: they show colloidal properties only at high concentrations, owing to the formation of aggregated particles (micelles), and they behave like macromolecular colloids because of the large effective mass of the aggregates. Examples of multimolecular colloids: gold sol, sulphur sol. Examples of macromolecular colloids: cellulose, starch.
3. What do you mean by the stability of colloids?
The stability of a colloid refers to the ability of its particles to remain suspended in solution, and it depends on the interaction forces between the particles. Both electrostatic interactions and van der Waals forces contribute to the overall free energy of the system. If the colloid is stable, that is, if the attractive interaction energy between the colloidal particles is less than kT, where k is the Boltzmann constant and T is the absolute temperature, then the particles repel or only weakly attract each other and the substance remains in suspension.
4. How are colloids used to purify water?
Colloidal particles of very small diameter are responsible for the turbidity and the color of surface water. Because these particles settle extremely slowly, the best way to eliminate them is by coagulation-flocculation processes. The aim of coagulation is to destabilize the electrostatic charge on the particles so that they can come together and aggregate. River, lake, and canal water contains such suspended and dissolved impurities, which are mostly negatively charged colloidal particles. Added $Al^{3+}$ ions coagulate these negatively charged colloidal impurities; the resulting flocs settle down, and the purified water is decanted.
I've been looking around and it appears suggestions are only to store data to UserPrefs, or PersistentDataPath.
I intend to have map files streamed to the client on demand, and saved for reuse. UserPrefs seems inappropriate for large blobs. PersistentDataPath is per computer user, thus duplicating data and space usage. I believe Program Files has access restrictions too.
What are the current best practices for storing runtime data that meet the following criteria?
• Is shared across user accounts to avoid wasting disk space.
• Doesn't require elevated permissions.
• Doesn't annoy the user (such as data being placed in My Documents).
• Can be modified at any time.
I don't think placing game-related files in My Documents/My Games/ annoys users; it's pretty much standard to put them there, so why don't you do that? Each user can have its own save games, skins, and mods this way, and the files are easily accessible and moddable.
However, for larger assets you do want to consider sharing them amongst user accounts. I think you have two options:
• Users/Public/ isn't tied to a specific account and can be used for this.
• Store it locally like in a Data/Maps folder within your main game folder.
Apart from that, I think map data should in most cases not be all that large. Maps should just be a bunch of numbers like IDs and positions, so unless they are huge dynamic worlds it won't really matter if duplicates are occasionally stored on the same drive.
• I would have also thought inside mygame/Data/Maps etc., as your other content would likely be stored in the same location (such as mygame/Data/Models). – lozzajp Oct 31 '16 at 12:02
• Agreed, though assuming it won't annoy users is possibly not a good thing. I wouldn't classify myself as an unwitting, run-of-the-mill user, but it annoys me quite a lot. Per default, your users folder is on your system drive. If you're trying to keep that clean, it gets on one's nerves if programs just bluntly dump downloaded files of arbitrary size there. You should also consider that the users folder will be synced in most AD environments; that will mean syncing big downloaded files to the AD server. – Timothy Groote Oct 31 '16 at 12:30
• @TimothyGroote I agree, large files should not belong there. I have no experience with servers and synchronizing these folders, however. It's always better to give the experienced player options to choose from. – Madmenyo Oct 31 '16 at 12:38
I recommend you use regular System.IO together with the System.Environment class. You can store to Local AppData. The Environment class provides the SpecialFolder enum and the GetFolderPath method to get the place to store data. This actually works pretty well, as it returns the correct AppData folder for each operating system (on Windows it is AppData/Local). You can then create your own folder in there for your game (if it doesn't already exist). You can store anything you need there: settings, levels, scores, etc.
Upon much research, I've decided ProgramData is the best place. It's made for common data between users for a program (as opposed to the per-user AppData). I'll save per-user to AppData, and saves to %USER%/Saved Games.
I should have mentioned that these files aren't necessary to keep around; they're more of a cache. I shouldn't have mentioned 'My Documents', as that's not shared between users at all anyway. I don't consider game data to be a document.
Unfortunately, it's a semi-protected directory where you can't modify a file that was created by another user.
Mono 2, which Unity uses, doesn't implement the appropriate classes for modifying permissions. I've had to resort to PInvoke/Native plugins.
I know the maps won't be large, but I'm not going to use that as an excuse for not doing things properly. Users/Public just seems like abuse.
Edit: You could do it without permissions, using a cyclic system. User1 saves to "A.txt", if the file needs changing, and User2 logs in next, User2 saves to "B.txt". When User1 logs back in, "A.txt" could be deleted, etc. |
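For what it's worth, here is a minimal, language-agnostic sketch of that cyclic, per-user-file idea (shown in Python only for brevity; the directory, file-naming convention, and helper names are made up for illustration, and in Unity you would do the equivalent with System.IO):

```python
import glob
import os

# Illustrative shared location; on Windows this would live under ProgramData.
CACHE_DIR = os.path.join(os.environ.get("ProgramData", "/tmp"), "MyGame", "MapCache")

def save_map(map_name, data, user_tag):
    """Each user only ever writes (and later deletes) its own copy."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = os.path.join(CACHE_DIR, f"{map_name}.{user_tag}.map")
    with open(path, "wb") as f:
        f.write(data)

def newest_copy(map_name):
    """Readers simply take the most recently written copy, whoever wrote it."""
    copies = glob.glob(os.path.join(CACHE_DIR, f"{map_name}.*.map"))
    return max(copies, key=os.path.getmtime) if copies else None

def drop_own_stale_copy(map_name, user_tag):
    """On login, remove our own file if another user has written a newer one,
    so stale duplicates never have to be deleted by someone without rights."""
    mine = os.path.join(CACHE_DIR, f"{map_name}.{user_tag}.map")
    newest = newest_copy(map_name)
    if newest and os.path.exists(mine) and os.path.abspath(newest) != os.path.abspath(mine):
        os.remove(mine)
```

The point of the scheme is that no user ever needs to modify or delete a file created by another account, which sidesteps the ProgramData permission problem at the cost of occasionally holding a duplicate copy.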
# Deformation quantization and the action of Poisson vector fields
@article{Sharygin2016DeformationQA,
title={Deformation quantization and the action of Poisson vector fields},
author={G. Sharygin},
journal={Lobachevskii Journal of Mathematics},
year={2016},
volume={38},
pages={1093-1107}
}
• G. Sharygin
• Published 2016
• Mathematics
• Lobachevskii Journal of Mathematics
As one knows, for every Poisson manifold M there exists a formal noncommutative deformation of the algebra of functions on it; it is determined in a unique way (up to an equivalence relation) by the given Poisson bivector. Let a Lie algebra g act by derivations on the functions on M. The main question, which we shall address in this paper is whether it is possible to lift this action to the derivations on the deformed algebra. It is easy to see, that when dimension of g is 1, the only necessary…
6 Citations
Survey of the Deformation Quantization of Commutative Families
• Mathematics
• 2016
In this survey chapter we discuss various approaches and known results, concerning the following question: when is it possible to find a commutative extension of a Poisson-commutative subalgebra in…
Universal Deformation Formula, Formality and Actions
• Mathematics
• 2017
In this paper we provide a quantization via formality of Poisson actions of a triangular Lie algebra $(\mathfrak g,r)$ on a smooth manifold $M$. Using the formality of polydifferential operators on…
On the Notion of Noncommutative Submanifold
We review the notion of submanifold algebra, as introduced by T. Masson, and discuss some properties and examples. A submanifold algebra of an associative algebra $A$ is a quotient algebra $B$ such…
Submanifold Algebras.
We review the notion of submanifold algebra, as introduced by T. Masson, and discuss some properties and examples. A submanifold algebra of an associative algebra A is a quotient algebra B such that…
α-type Hochschild cohomology of Hom-associative algebras and Hom-bialgebras
• 2018
In this paper we define a new cohomology for multiplicative Hom-associative algebras, which generalize Hochschild cohomology and fits with deformations of Hom-associative algebras including the…
$\alpha$-type Hochschild cohomology of Hom-associative algebras and Hom-bialgebras
• Mathematics
• 2018
In this paper we define a new cohomology for multiplicative Hom-associative algebras, which generalize Hochschild cohomology and fits with deformations of Hom-associative algebras including the…
#### References
Deformation Quantization of Poisson Manifolds
I prove that every finite-dimensional Poisson manifold X admits a canonical deformation quantization. Informally, it means that the set of equivalence classes of associative algebras close to the…
Hochschild cohomology and Atiyah classes
• Mathematics
• 2010
In this paper we prove that on a smooth algebraic variety the HKR-morphism twisted by the square root of the Todd genus gives an isomorphism between the sheaf of poly-vector fields and the…
The Jacobian Conjecture is stably equivalent to the Dixmier Conjecture
• Mathematics
• 2005
The Jacobian conjecture in dimension $n$ asserts that any polynomial endomorphism of $n$-dimensional affine space over a field of zero characteristic, with the Jacobian equal 1, is invertible. The…
Local cohomology of the algebra of C∞ functions on a connected manifold
• Mathematics
• 1980
A multilinear version of Peetre's theorem on local operators is the key to prove the equality between the local and differentiable Hochschild cohomology on the one hand, and on the other hand the…
Introduction to Kontsevich’s quantization theorem, Notes covering the material of a minicourse given at the EMALCA
• 2003
Krichever, Commutative rings of ordinary linear differential operators
• Funktsional. Anal. i Prilozh.
• 1978
Multivariate Process Control Chart for Controlling the False Discovery Rate
Title & Authors
Park, Jang-Ho; Jun, Chi-Hyuck;
Abstract
With the development of computer storage and the rapidly growing ability to process large amounts of data, multivariate control charts have received increasing attention. The existing univariate and multivariate control charts take a single-hypothesis-testing approach to the process mean or variance, using a single plotted statistic. This paper proposes a multiple-hypothesis approach to developing a new multivariate control scheme. Plotted Hotelling's $T^2$ statistics are used to compute the corresponding p-values, and a procedure for controlling the false discovery rate in multiple hypothesis testing is applied to the proposed control scheme. Some numerical simulations were carried out to compare the performance of the proposed control scheme with the ordinary multivariate Shewhart chart in terms of the average run length. The results show that the proposed control scheme outperforms the existing multivariate Shewhart chart for all mean shifts.
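As a rough illustration of the kind of scheme the abstract describes (not the authors' exact procedure, and assuming the in-control mean and covariance are known so that $T^2$ follows a chi-square distribution with $p$ degrees of freedom), the $T^2$ statistics can be converted to p-values and screened with the Benjamini-Hochberg step-up rule:

```python
import numpy as np
from scipy.stats import chi2

def hotelling_t2_pvalues(X, mu, Sigma):
    """T^2 = (x - mu)' Sigma^{-1} (x - mu) for each row of X; with known
    in-control mu and Sigma, T^2 is chi-square with p degrees of freedom."""
    Sinv = np.linalg.inv(Sigma)
    d = X - mu
    t2 = np.einsum("ij,jk,ik->i", d, Sinv, d)
    return 1.0 - chi2.cdf(t2, df=X.shape[1])

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure; returns a boolean mask of the
    observations flagged as out of control at FDR level q."""
    pvals = np.asarray(pvals)
    m = pvals.size
    order = np.argsort(pvals)
    passed = pvals[order] <= q * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.max(np.nonzero(passed)[0])   # largest rank meeting the bound
        reject[order[:k + 1]] = True
    return reject

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    mu, Sigma = np.zeros(3), np.eye(3)
    X = rng.multivariate_normal(mu, Sigma, size=50)
    X[-3:] += 2.5                           # a small mean shift at the end
    flags = benjamini_hochberg(hotelling_t2_pvalues(X, mu, Sigma), q=0.05)
    print(np.nonzero(flags)[0])             # indices flagged as out of control
```

The paper evaluates its scheme by the average run length; the sketch above only shows the per-sample decision rule.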
Keywords
Average Run Length;False Discovery Rate;Multivariate Shewhart Control Chart;p-Value;
Language
English
References
1. Anderson, T. W. (1958), An Introduction to Multivariate Statistical Analysis, Wiley, New York, NY.
2. Benjamini, Y. and Hochberg, Y. (1995), Controlling the false discovery rate: a practical and powerful approach to multiple testing, Journal of the Royal Statistical Society B Methodological, 57(1), 289-300.
3. Benjamini, Y. and Kling, Y. (1999), A look at statistical process control through the p-values, Research Paper: RP-SOR-99-08, Tel Aviv University, School of Mathematical Science, Israel.
4. Hotelling, H. (1947), Multivariate quality control, illustrated by the air testing of sample bombsights. In: Eisenhart, C. (ed.), Selected Techniques of Statistical Analysis for Scientific and Industrial Research, and Production and Management Engineering, McGraw-Hill Books, New York, NY.
5. Lee, S. H. and Jun, C. H. (2010), A new control scheme always better than X-bar chart, Communications in Statistics-Theory and Methods, 39(19), 3492-3503.
6. Lee, S. H. and Jun, C. H. (2012), A process monitoring scheme controlling false discovery rate, Communications in Statistics-Simulation and Computation, 41(10), 1912-1920.
7. Li, Z., Qiu, P., Chatterjee, S., and Wang, Z. (2012), Using p values to design statistical process control charts, Statistical Papers, 1-17.
8. MacGregor, J. F. and Kourti, T. (1995), Statistical process control of multivariate processes, Control Engineering Practice, 3(3), 403-414.
9. Miller, R. G. (1981), Simultaneous Statistical Inference, Springer-Verlag, New York, NY.
10. Montgomery, D. C. (2007), Introduction to Statistical Quality Control, Academic Internet Publishers, Ventura, CA.
'Randomly' construct symmetric/positive definite pair of matrices with a specific generalized eigenvector?

How can I generate a random invertible symmetric positive (semi)definite matrix, for example in MATLAB or in Python/NumPy? Ideally I would like some control over the result, e.g. prescribing the eigenvalues and eigenvectors, restricting the entries to a given range (say 0 to 4), or constructing a pair of matrices with a specific generalized eigenvector.

Several standard constructions and facts come up in the answers and comments:

• Gram matrices. For any real matrix A, the product A^T A (equivalently A A^T) is symmetric and positive semidefinite, so its eigenvalues are real and non-negative; if A has full rank, which a matrix drawn from a continuous distribution has with probability one, the product is positive definite. In NumPy this is simply B = A @ A.T for a random A, and the same recipe works in MATLAB or with scipy's random-number routines. Because of symmetry, only n(n-1)/2 independent random entries are needed to specify a random symmetric matrix: generate the upper triangle and mirror it, or symmetrize with (P + P.T)/2.

• Prescribed spectrum. Form B = Q D Q^T, where D is a diagonal matrix of chosen positive eigenvalues and Q is a random orthogonal matrix, e.g. the Q factor of the QR decomposition of a random matrix. Congruence transformations do not change the inertia of a matrix, so B is positive definite whenever the diagonal of D is positive. If specific eigenvalues and eigenvectors are required, this construction is exactly the eigendecomposition, so the matrix can be assembled directly from them.

• Other routes. The matrix exponential exp(A) = I + A + A^2/2! + A^3/3! + ... of a symmetric matrix is positive definite. The Lehmer matrix is a positive definite matrix with the property that raising each element to a non-negative power yields a positive semidefinite matrix. For random correlation matrices there are the "c-vine" and "onion" methods (with a parameter eta, which should be positive; eta = 1 gives the uniform case); see Joe, H. (2006), Generating Random Correlation Matrices Based on Partial Correlations. The stray n_dim / random_state parameter listings above appear to come from a ready-made library generator of this kind (scikit-learn's make_spd_matrix), and in R the Matrix package provides nearPD(), which finds the nearest positive semidefinite (PSD) matrix to a given matrix.

To test positive definiteness of a symmetric matrix: check that all eigenvalues are positive, check that all pivots or all upper-left sub-determinants (leading principal minors) are positive, or simply attempt a Cholesky factorization, which assumes the matrix is Hermitian and positive definite, so a failed factorization signals that the matrix is not positive definite. A non-symmetric matrix B is positive definite if all eigenvalues of (B + B^T)/2 are positive. Note also that the set of positive definite matrices is an open set, so sufficiently small perturbations of a positive definite matrix remain positive definite.

One caveat from the comments: "in general there are no such matrix distributions as described in this question"; that is, "a random positive definite matrix" is not a single well-defined distribution, so you need to decide which ensemble you actually want, since the constructions above give quite different ones.
Questions regarding tower of normal/separable extensions
I am learning about Galois theory these days, and I am trying to prove (or disprove) the following:
1. Is it true that, given a tower of extensions $A/B/C/D$, if $A/B$, $B/C$, and $C/D$ are normal, then $A/D$ is normal?
2. Is it true that, given a tower of extensions $A/B/C/D$, if $A/B$, $B/C$, and $C/D$ are separable, then $A/D$ is separable?
If it is true, may I please ask for a proof? Or if it is false, may I please ask for some counterexamples? Any help or reference would be appreciated. Thanks a lot!
Let $A={\bf Q}(\sqrt[4]{2})$, $B={\bf Q}(\sqrt2)$, $C=D={\bf Q}$. Then $A/B$, $B/C$, $C/D$ are all normal (each step has degree at most $2$), but $A/D$ isn't: the minimal polynomial of $\sqrt[4]{2}$ over ${\bf Q}$ is $x^4-2$, which has the non-real roots $\pm i\sqrt[4]{2}$, and these do not lie in $A\subset\mathbb{R}$.
# Wartime Economics
Most of us hope that history will record World War II as the last global armed conflict on this planet, but it is generally conceded that this hope is still far too nebulous to justify discontinuing physical preparation for meeting an attack, and presumably the same policy of preparedness should apply in the economic field. Furthermore, even if we never have the same kind of an emergency again, a study of the experiences and the mistakes in handling the economy in wartime, when many of the normal economic problems are encountered in greatly exaggerated forms, should provide us with some information that will be helpful in application to the less severe manifestations of the same problems in normal life.
In undertaking a brief survey of the effects of large-scale war on the national economy, the first point that should be noted is that no nation has ever been in the position of having anywhere near enough productive capacity in reserve to meet the requirements of a major war. In all cases it has been necessary to divert a very substantial part of the civilian productive capacity to military production in addition to whatever new capacity could be brought into being. We can get a general idea of the magnitude of this diversion during World War II by analyzing the employment statistics, inasmuch as the available evidence indicates that the average productive efficiency in civilian industries during the war period was not substantially different from that of the immediate pre-war years.
There are some gains in productivity under wartime conditions because most producers are operating at full capacity and without the necessity of catering to all of the preferences of the consumers, but these are offset by the deterioration in the quality of the labor available to the civilian industries, the handicaps due to the lack of normal supplies of equipment and repair parts, and the very outstanding change in the attitude of the employees toward any pressure for efficiency. No accurate measurement of the relative productivities is available, but for an approximate value which will serve our present purposes, it should be satisfactory to assume a continuation of the pre-war level of productivity, in which case a comparison based on employment reflects the relative volumes of production.
In 1943, which was the peak year from the standpoint of the percentage of the total national product devoted to war purposes, the total labor force was approximately 60 million, and of this total the Commerce Department estimates that there were 28 million workers engaged on war activities, including war production.138 From the same source we get an estimate that the spending for war purposes in 1943 was approximately 45 percent of the total national product, which agrees with the labor statistics within the margin of accuracy that can be attributed to the two estimates. Subtracting the 28 million workers from the total of 60 million, we find that there were about 32 million workers engaged on the production of civilian goods. Hours of work were somewhat above normal. No exact figures are available, as we cannot distinguish between war work and non-war work in the data at hand, and it is likely that the reported average working hours are heavily weighted by overtime work in the war production industries, but an estimate of ten percent above normal would not be very far out of line. On this basis, the civilian labor force, for purposes of comparison with pre-war figures, was the equivalent of about 35 million full-time workers.
In 1940, before the war, there were 45 million workers employed. The decrease in civilian employment from 1940 to 1943 was 22 percent. We therefore arrive at the conclusion that the production of civilian goods dropped more than 20 percent because of the diversion of effort to war production, in spite of all of the expedients that were employed to expand the labor force. But this does not tell the whole story. In addition to the 45 million workers employed in 1940, there were another 8 million who should have been employed, a relic of the Great Depression that still haunted the economy. If the war had broken out in a time of prosperity (as the next one, if there is a next one, may very well do) the drop in civilian workers would have been from 53 million to an equivalent of 35 million workers, or 34 percent. We thus face the possibility that in the event of another major war we may have to cut the production of civilian goods as much as one third.
The significance of these figures is that there must be a reduction of the standard of living during a major war. A part of the production necessary to feed the war machine can be provided by discontinuing new capital additions and postponing maintenance and replacement of existing facilities. All this was done in World War II. Construction of new houses and manufacture of automobiles, home appliances, and the like, were kept to a bare minimum, while those items in these categories that were already in existence were visibly going down hill throughout the war years. But such expedients are not adequate to offset a 20 percent reduction in the civilian work force, to say nothing of a 35 percent reduction, if that becomes necessary. There must be a general lowering of civilian consumption. The nation’s citizens have to go without the products that would have been produced by the efforts of the 10 to 18 million workers that are diverted to war production.
All this is entirely independent of the methods that are employed in financing the war effort. Today’s wars can be fought only with the ammunition that is available today; no financial juggling can enable us to make any use today of the war material that will not be produced until tomorrow. If our nation, or any other, enters into a major war, then during the war period the citizens of that nation, as a whole, must reduce their standard of living. As the popular saying goes, “It cannot be guns and butter; it must be guns or butter.”
A corollary to this principle is that if any economic group succeeds in maintaining or improving its standard of living during wartime, some other group or the public at large must carry an extra burden. The labor union which demands that its “take-home pay” keep pace with the cost of living is in effect demanding that its members be exempted from the necessity of contributing to the war effort, and that their share of the cost be assessed against someone else. Similarly, the owner of equity capital who is permitted to earn abnormally high profits during the war period because of an inflationary price rise is thereby allowed to transfer his share of the war burden to other segments of the economy. The most serious indictment that can be made of the management of the U.S. economy during World War II is that it allowed some portions of the population to escape the war burden entirely, while others had to carry a double load.
To many of the favored individuals, particularly those engaged on urgent war production, where the pay scales were set high to facilitate the recruitment of labor, and where “cost plus” and other extremely liberal forms of compensation were the order of the day for the employers, the war period was a time of unparalleled prosperity, and there is a widespread tendency on the part of superficial observers to regard this era as one of general prosperity. “Why should it take a major war to lift us out of a depression and into prosperity?” we are often asked by those who share this viewpoint. But surely no one who attempts to look at the situation in its entirety can believe for a minute that the nation as a whole makes economic gains during a major war. Everyone knows full well that war is an extremely expensive undertaking. While we are thus engaged, we do not save, we do not prosper. The conflict not only swallows up all of our surplus production over and above living requirements, but also takes a heavy toll of our accumulated wealth. Even though the United States did not suffer the deliberate destruction by bombing and shelling that was the lot of those nations unfortunate enough to be located in the actual theaters of war, our material wealth decreased drastically.
The so-called “war prosperity” was simply an illusion created by government credit operations, and the discriminatory policies that deal out individual prosperity to some merely increase the cost that has to be met by others. Approximately one third of the “income” received in the later years of the war was nothing but hot air. It had no tangible basis, and it could not be used except when and as it was made good by levying upon the taxpayers for real values to replace the false. The “disposable” personal income in 1943, according to official financial statistics, was 134 billion dollars. But the increase in the national debt during the same period was 58 billion dollars,139 exclusive of the increase in outstanding currency, which is part of the debt, but not included in the statistics. Inasmuch as the income recipients are also the debtors, their true income (aside from the currency transactions and any debt increase in local governmental units) was the difference between these two figures, or 76 billion dollars, not the 134 billion that the individual members of the public thought that they received.
Anyone can understand that the money which he personally borrows from the bank is not part of his income. That which the government borrows on his behalf has no different status. The additional 58 billion dollars of so-called “income” created through credit transactions was purely an illusion, and the addition of this amount to the national income statistics did not increase the ability of the people of the nation to buy goods either in 1943 or at any other time. The only thing that this fictitious, credit-created income did, or can, accomplish is to raise prices.
There is an unfortunate tendency, among economists and laymen alike, to look upon government bonds outstanding in the hands of the public as an asset to the national economy, an accumulation of savings which constitutes a fund of purchasing power available for buying the products of industry. This fallacy was very much in evidence in the forecasts of the economic trends that could be expected after the close of World War II. The National Association of Manufacturers,140 for instance, commented with satisfaction on the large amount of “unused” buying power in the form of government bonds that would be available for the purchase of goods after the war. Alvin Hansen took the same attitude with regard to government obligations in general. “The widespread ownership of the public debt, this vast reserve of liquid assets,” he said, “constitutes a powerful line of defense against any serious recession.”141 But government bonds were not, and are not, “unused buying power,” nor are they a “reserve of liquid assets.” On the contrary they are very much used buying power and they are not assets. In the postwar case cited by Hansen, the buying power which they represent had been used for airplanes, tanks, guns, ships, and all of the other paraphernalia needed to carry on modern warfare, and it could not be used again. All that was left was the promise of the government that in due course it would tax one segment of the public to return this money to another group.
It is nothing short of absurd to treat evidences of national debt as assets. The only assets we have, outside of the land itself, are the goods currently produced and the tangible wealth that has been accumulated out of past production. Government bonds are not wealth; they are merely claims against future production, and the more bonds we have outstanding the more claims there are against the same production. In order to satisfy those claims the workers of the future will have to give up some of the products of their labors and turn them over to the bondholders. There is no magic by which the debt can be settled in any other way. It can, of course, be repudiated, either totally, by a flat refusal to pay, or partially, by causing or permitting inflation of the price level. But if the debt is to be paid, the only way in which the bondholders can realize any value from the bonds, it can only be paid at the expense of the taxpayers and consumers. Regardless of what financial sleight-of-hand tricks are attempted, the day of reckoning can be postponed only so long as the creditors can be persuaded to hold pieces of paper instead of tangible assets. When the showdown comes and they insist on exercising their claims, the goods that they receive must come out of the products that would otherwise be shared by workers and suppliers of capital services. If this diversion is not done through taxes it will be done by inflation. It cannot be avoided.
A little reflection on the financial predicaments in which so many foreign governments now find themselves should be sufficient to demonstrate how ridiculous that viewpoint which regards government bonds as “liquid assets” actually is. These governments are not lacking in printing presses, and if they could create assets simply by putting those presses to work, there would soon be no problems. But all the printing that they can do, whether it be printing bonds or printing money, does not change the economic situation of these nations in the least. Their real income is still measured by their production-nothing else-and their problems result from the fact that this production does not keep pace with their aspirations.
The spending enthusiasts assure us that government debt is of no consequence; that we merely “owe it to ourselves,” but this is loose and dangerous reasoning. It is true that where the debt is held domestically the net balance from the standpoint of the nation as a whole is zero. But this means that we now have nothing, whereas before the “deficit spending” was undertaken we had something real. What has happened is that under cover of this specious doctrine the government has spent our real assets and has replaced them with pieces of paper.
Government borrowing differs greatly from dealings between individuals. When we borrow from each other the total amount of available money purchasing power is not altered in any way; that is, there is no reservoir transaction. All that has taken place is a transfer from one individual to another. No one has increased or decreased his assets by this process. The lender has parted with his money, but he now has some evidence of the loan to take its place, and the net assets shown on his balance sheet remain unchanged in amount. The borrower now has the money, but his books must indicate the debt as a liability.
The borrowing done by the government is not a balanced transaction of this kind. It is a one-sided arrangement in which the participation of the government conceals the true situation on the debit side of the ledger. The net position of the lender appears to be the same as in the case of private credit dealings. His cash on hand has decreased, but he has bonds to take the place of the money. However, as a taxpayer, he now owes a proportionate share, not only of the bonds that he holds but also all other bonds that the government has issued. Any family that has laid away $1000 in bonds believes that they have saved $1000 which will be available for buying goods when they wish to spend the money. But while these savings were being accomplished, the government, on behalf of its citizens, built up a debt of $2000 per capita. A family of four which has saved only $1000 has in reality gone $7000 into the red. The debt will probably be passed on to their heirs, but in the long run someone will have to pay it in one way or another. The "savings" made during a period of heavy government borrowing are fictitious and they cannot be used unless someone gives up real values to make them good. Either these real values must be taken from the public through the process of taxation, or those who work and earn must share their earnings with the owners of the fictitious values through the process of money inflation. Government borrowing provides the ideal vehicle for those who wish to spend the taxpayers' money without the victims realizing what is going on.

Another absurd idea that is widely accepted is that the shortages of goods such as those which are caused by the curtailment of production during a major war constitute a favorable economic factor when the war is over and productive facilities are again available for civilian goods. Much stress was laid on the "deferred demand" for goods that was built up during World War II, and in the strange upside down economic thinking of modern times this was looked upon as a favorable factor, one of the "major ingredients of prosperity," as the National Association of Manufacturers140 characterized it. But the truth is that the deferred demand was simply a measure of the deterioration that had taken place in the material wealth of the nation. There was a deferred demand for automobiles only because our cars had worn out and we were too busy with war production to replace them. If this is an "ingredient of prosperity" then the atomic bombs are capable of administering prosperity in colossal doses. But such contentions are preposterous. We cannot dodge the fact that accumulated wealth always suffers a serious loss during a major war, and the "deferred demand" is a reflection of that loss, not an asset.

While real wealth decreases during the conflict, the government conceals the true situation by creating a fictitious wealth that the individual citizens are unable, for the time being, to distinguish from the real thing. Instead of the automobile which is now worn out and ready for the junk pile, Joe Doakes now possesses war bonds which to him represent the same amount of value, and with which he expects to be able to buy a new car when the proper time arrives. But the value that he attributes to the bonds is only an illusion, a bit of financial trickery, and in reality Doakes will have to pay for his car in taxes or by an inflationary decrease in his purchasing power.
Not only was enough of this false wealth created to mask the loss in real wealth during the war, but it was manufactured in quantities sufficient to make high wage scales and extraordinary profits possible while the true economic position of the nation as a whole was growing steadily worse. The extent to which superficial observers were deceived by the financial sleight-of-hand performance is well illustrated in this statement by Stuart Chase: "After Pearl Harbor money came rolling in by the tens of billions, enough of it not only to pay for the war but to keep the standard of living at par. Both guns and butter were financed."142 Perhaps such illusions can be maintained permanently in the minds of some individuals. Chase published these words in 1964, apparently unimpressed by the fact that his dollar was worth less than half of its 1941 value, or by the further fact that nearly 200 billion dollars of the money that "came rolling in" during the war was still hanging over the heads of the taxpayers in the form of outstanding government bonds.

Whether all members of the general public realize it or not, the taxpayers ultimately have to pay all of the costs of a war. The holders of government bonds are not satisfied to hoard their bonds as the miser does his gold pieces; they all expect to exchange them for goods sooner or later. Then Joe Doakes must be taxed, either directly through the tax collector, or indirectly through inflation. Financial juggling may postpone the day of reckoning, but that day always arrives.

Clear-thinking observers realize that huge individual holdings of readily negotiable government bonds constitute a serious menace to the national economy, not a source of economic strength. Even before the end of World War II the analysts of the Department of Commerce were beginning to worry about the financial future. Here is their 1944 estimate of the situation: "While the government encountered no major difficulties in raising money needed for the largest military program in history, it left the people with a tremendous fund of liquid assets. Part of this fund is sufficiently volatile to be a distinct inflationary threat at the moment. It may constitute a problem of major magnitude in the immediate postwar period."143

Now let us turn back to the principles developed in the earlier chapters, and see just why large bond and currency holdings are dangerous, why they "constitute a problem of major magnitude." On analysis of the market relations, it was found that the essential requirement for economic stability is a purchasing power equilibrium: a condition in which the purchasing power reaching the markets is the same as that created by current production. It was further determined that the factor which destroys this balance and causes economic disturbances is the presence of money and credit reservoirs which absorb and release money purchasing power in varying quantities, so that the equilibrium between production and the markets that would otherwise exist is upset first in one direction and then in the other. Naturally, the farther these reservoirs depart from their normal levels the greater the potential for causing trouble. And the outstanding feature of the immediate post-war situation was that the money and credit reservoirs were filled to a level never before approached. Except when it serves to counterbalance an actual deflationary shortage of money purchasing power, money released from the reservoirs can do no good. It cannot be used for additional purchases.
There is no way of producing additional goods for sale without at the same time and by the same act producing more purchasing power. No matter how much we may expand production, the act of production creates all of the purchasing power that is needed to buy the goods that are produced. So the money released from the reservoirs can do nothing but raise prices. Instead of being a "reserve of liquid assets," as seen by the general public and by Keynesians like Alvin Hansen, the government bonds in the hands of individuals at the end of the war constituted an enormous load of debt. The financial juggling that misled the public-and many of the "experts" as well-into believing that the nation had accumulated a big backlog of assets merely made the adjustment to reality more difficult than it otherwise would have been.

One of the most distressing features of the post-World War II situation is that the policies which brought it about-the policies that led to a severe inflation, that conferred great prosperity on favored individuals while their share of the war burden was shifted to others, that left us with a post-war legacy of debt and other financial problems-were adopted deliberately, and with a reasonably complete knowledge of the consequences that would ensue. H. G. Moulton gives us this report: "Shortly after the United States entered the war, a memorandum on methods of financing the war, subscribed to by a large number of professional economists, was submitted to the government. In brief, it was contended that stability of prices could be maintained if 'proper' methods of financing were employed. It was held that if all the money required by the government were raised by taxes on income or from the sale of bonds to individuals who pay for them out of savings, there would be no increase in the supply of money as compared with the supply of goods, and hence no rise in the price level. On the other hand, to the extent that the Treasury borrowed the money required, either from the banks or from individuals who borrow in order to invest in government securities, the resulting increase in the supply of money would inevitably produce a general rise in prices."144

This statement is inaccurate in some respects. It attributes the inflationary price rise to an increase in the supply of money rather than to the true cause: an increase in the money purchasing power available for use in the markets, and it fails to recognize that cost inflation due to wage increases and higher business taxes would raise prices to some extent even if money inflation were avoided, but essentially it was a sound recommendation, and if it had been adopted the post-war inflation problems would have been much less serious. However, as Moulton says, "The policy pursued by the government was in fact quite the opposite."

It is quite understandable that a government which rests on a shaky base and is doubtful as to the degree of support it would receive from the people of the nation in case the true costs of war were openly revealed should resort to all manner of expedients to conceal the facts and to avoid facing unpleasant realities, even though it is evident that this will merely compound the problems in the long run. Perhaps there are those who are similarly uneasy about the willingness of the American public to stand behind an all-out military effort if they are told the truth about what it will cost, but the record certainly does not justify such doubts.
Past experience indicates that they are willing to pay the bill if they concur in the objective. It is true that, as J. M. Clark put it, there is a tendency toward "an uncompromising determination on the part of powerful groups that 'whoever has to endure a shrunken real income, it won't be us.'"145 But such intransigent attitudes are primarily results of the policies that were adopted in fear of them. The worker who sees the extravagant manner in which the war spending is carried on, the apparently boundless profits of war-connected business enterprises and the general air of "war prosperity" can hardly be criticized if he, too, wants his take-home pay maintained at a high level. But if the government is willing to face realities, and, instead of creating a false front by financial manipulation, carries out a sound and realistic economic policy that does not conceal the true conditions-one that makes it clear to all that wartime is a time of sacrifice, and will require sacrifices of everyone-there is good reason to believe that most members of the general public, including the industrial workers, would take up their respective burdens without demur.

The first requirement of a realistic wartime economic policy is sound finance. As pointed out earlier in the discussion, the general standard of living must drop when a major portion of a nation's productive facilities is diverted from the production of civilian goods to war production, and the straightforward way of handling this decrease that must take place in any event is by taxation. However, taxes are always unpopular, and since governments are prone to take the path of least resistance, the general tendency is to call upon other expedients as far as possible and to keep taxes unrealistically low. But this attempt to avoid facing the facts is the very thing that creates most of the wartime and post-war economic problems. The only sound policy is to set the taxes high enough to at least take care of that portion of the cost of the war that has to be met from income. The other major source from which the sinews of war can be obtained is the utilization of tangible wealth already in existence, either directly, or indirectly by not replacing items worn out in service, thus freeing labor for war production. There are some valid arguments for handling this portion of the cost of the war by means of loans rather than taxes, but in order to keep on a sound economic basis any such borrowing should be done from individuals, not from the banking system.

The objective of these policies of heavy taxation and non-inflationary borrowing is to reduce the consumers' disposable income by the same amount that the government is spending, thus avoiding money inflation. Some cost inflation may, and probably will occur, as there will undoubtedly be some upward readjustment of wages to divert labor into war production, but this should not introduce any serious problems. Prevention of money inflation will automatically eliminate the "easy profit" situation in civilian business. Profits will remain at normal levels, but they will remain normal only for those who keep their enterprises operating efficiently. They will not come without effort, as is the case when money inflation is under way. There will no doubt continue to be a great deal of waste and inefficiency in the direct war production industries, as it is hard to keep an eye on efficiency when the urgency of the needs is paramount.
But, on the whole, this kind of a sound financial program will not only apportion the war burden more equitably, but will also contribute materially toward lightening that burden, since it will eliminate much of the inefficiency that inevitably results when there is no penalty for inefficient operation.

A sound and realistic program of financing the war effort will have the important additional advantage of avoiding public pressure for "price control" measures. If the price level stays constant in wartime, it is clear to the individual consumer that his inability to obtain all of the goods necessary to maintain his pre-war standard of living is due to the heavy taxation necessitated by the military requirements. He can see that he is merely carrying a share of the war burden. But when his take-home pay, the balance after payroll taxes and other deductions, is as large as ever, perhaps even larger than before the war, and he has been led to believe that the cost of the war is being met by the expansion of the nation's productive facilities-that the management of the war effort by the administration in power is so efficient that the economy can produce both guns and butter-then the inability to maintain his pre-war standard of living is, in his estimation, chargeable to inflated prices. This price rise is not anything that he associates with the conduct of the war. To him it is caused by the activities of speculators, profiteers, and the other popular whipping boys of the economic scene, and he wants something done about it.

The usual government answer is some action toward "price control," often only a gesture; sometimes a sincere and well-intentioned effort. But however praiseworthy the motives of the "controllers" may be, attempts to hold down the cost of living by price control are futile, and to a large degree aggravate the situation that they are intended to correct. As brought out in the previous discussion, price is an effect-mathematically it is the quotient obtained when we divide the purchasing power entering the markets by the volume of goods-and direct control of the general price level is therefore mathematically impossible. Any attempt at such a control necessarily suffers the fate of all of man's attempts to accomplish the impossible. Some prices can be controlled individually, to be sure, but whatever reductions are accomplished in the prices of these items are promptly counterbalanced by increases in the prices of uncontrolled items. The general level of prices is determined by the relation of the purchasing power entering the markets to the volume of goods produced for civilian use, and since the goods volume is essentially fixed in wartime, the only kind of an effective control that can be exercised over the general price level is one which operates through curtailment of the available purchasing power.

Even if it were possible to establish prices for all goods, and administer such a complex control system, whatever results might be accomplished would not be due to the price control itself, but to the fact that the excess purchasing power above that required to buy the available goods at the established prices would be, in effect, frozen, as it would have no value for current buying. Furthermore, price control is not merely a futile waste of time and effort; it actually operates in such a manner as to intensify the problem which brought it into being.
The relative market prices of individual items are determined by supply and demand considerations, and if one of these prices shows a tendency to rise preferentially, this means that the demand for this item at the existing price is greater than the supply. If the price is permitted to rise, the higher price results in a decrease in the demand and generally increases the supply of the item, thus reestablishing equilibrium. Holding down the price by means of some kind of an arbitrary control accentuates the demand, which is already too high, and restricts the supply, which is already too low. “Surely no one needs a course in systematic economics,” says Frank Knight, “to teach him that high prices stimulate production and reduce consumption, and vice versa. The obvious consequence is that any enforced price above the free-market level will create a “surplus” and one below it a “shortage,” entailing waste and generating problems more complex that any the measure is supposed to solve.”146 This is another place where the economists have allowed themselves to be governed by emotional reactions rather than by logical consideration of the facts. Samuelson, for example, calls attention to an instance in which the price of sugar was “controlled” at 7 cents per pound, where the market conditions were such that the price might otherwise have gone as high as 20 cents per pound. “This high price,” he tells us, “would have represented a rather heavy “tax' on the poor who could least afford it, and it would only have added fuel to an inflationary spiral in the cost of living, with all sorts of inflationary reactions on workers’ wage demands, and so forth.”147 In analyzing this statement, let us first bear in mind that a high price for sugar does not deprive anyone of the sugar which he actually needs. As all the textbooks tell us, and as we know without being told, the most urgent wants are satisfied preferentially. An increase in the cost of any item therefore results in a reduction in the consumption of the least essential item in the family budget. A rise in the price of sugar thus has no more significance than an increase in the price of that least essential item. Higher prices for any component of a consumer’s purchases reduce the standard of living that he is able to maintain. To the extent that sugar enters into non-essentials, such as candy, the consumption of sugar will be reduced irrespective of where the price rise takes place, but to the extent that sugar is regarded as essential to the diet, the reduction will take place in the consumption of other goods. In view of the severe general inflation that was taking place at the same time, the excessive concern about the possible rise in the price of sugar, a very minor item in the consumer’s expenditures, is rather ridiculous. It is clearly an emotional reaction rather than a sober economic judgment. Whatever “tax” the sugar price increase may have imposed on the poor was simply a part of an immensely greater “tax” due to the general inflation of the price level by reason of government financial policy. 
Furthermore, Samuelson, in common with many of his colleagues, apparently takes it for granted that an increase in the price of one commodity will exert an influence that will tend to cause other prices to rise-it will “add fuel to an inflationary spiral,” as he puts it-whereas the fact is that any increase in the price of one commodity reduces the purchasing power available for buying other goods and hence must cause a decrease in some other price or prices. Unless the total available purchasing power is increased in some manner, the average price cannot rise. Of course, under inflationary conditions, the available money purchasing power is being increased, and all prices are moving in the upward direction, but each separate increase absorbs a part of the excess purchasing power; it does not contribute toward further increases. The snowball effect visualized by Samuelson is non-existent. An increase in the price of sugar is an effect, not a cause. When the government draws large quantities of money from the reservoirs and pours it into the purchasing power stream going to the markets, the average price must go up no matter how effectively the prices of sugar and other special items are controlled. Any rise in the price of an individual item that exceeds the inflationary rise in the general price level is due to a lack of equilibrium between the supply and demand for that item. If the price is arbitrarily fixed at a point below the equilibrium level, this is a bargain price for the consumer, and it increases the already excessive demand, while the already inadequate supply is further reduced, since producers are, in effect, penalized for producing controlled, rather than uncontrolled items. As an example of what this leads to, the price of men’s standard white shirts was controlled during World War II, whereas non-standard shirts, such as sport shirts, were partially or wholly exempt from control. The result could easily have been foreseen by anyone who took the trouble to analyze the situation. The manufacturers made little or no profit on the production of standard shirts, and therefore held the production to a minimum. During much of the war period they were almost impossible to obtain in the ordinary course of business, and those who wanted shirts had to buy fancy sport shirts, which were available in practically unlimited quantities at much higher prices. The net result was that the consumer, for whose benefit the controls were ostensibly imposed, not only paid a very high price for his shirts, but had to accept something that he did not want. This is not an unusual case; it is the normal way in which price control operates. The controls produce shortages, and the consumers are then forced to pay high prices for unsatisfactory substitutes. Samuelson makes a comment which reveals some of the thinking that lies behind the seemingly inexplicable advocacy of price control by so many of the economists: the very group who ought to be most aware of its futility. 
Following his discussion of the sugar illustration and related items, he tells us, “the breakdown of the price mechanism during war gives us a new understanding of its remarkable efficiency in normal times.”148 It is not entirely clear whether it is the abnormal rise in the price of sugar and other scarce commodities that he calls a “breakdown,” or whether it is the general rise in the price level, but in either case he is accusing the price mechanism of breaking down when, in fact, it is doing exactly what it is supposed to do, and what should be done in the best interests of the economy. When the government is pumping large amounts of credit money into the markets, as it did in World War II, prices must rise enough to absorb the additional money purchasing power. The function of the price mechanism is to cause the necessary price increase to take place and to allocate it among the various goods in accordance with the individual supply and demand situations. The mechanism simply responds automatically to the actions which are taken with respect to the purchasing power flow; it is not a device for holding down the cost of living. In order to prevent a rise in the general price level, if this seems desirable, measures must be taken to draw off the excess money purchasing power and either liquidate it or immobilize it for the time being. If Samuelson’s diagnosis of a “breakdown” refers to the greater-than-average rise in the prices of certain commodities such as sugar, he is equally wrong in his conclusions, as the price mechanism is doing its job here; it is reducing the demand for these scarce items and increasing the supply. The price rise will force some consumers to reduce their use of these commodities, of course, but when productive facilities are diverted from civilian use to war purposes, the consumers must reduce their standard of living in one way or another. Someone must use less of the scarce items. The truth is that the price system does its job in wartime with the same “remarkable efficiency” as in times of peace. It does what has to be done when the results are unwelcome, as well as when they are more to our liking. Samuelson and his colleagues are blaming the price system for results that are due to government financial policies. Under some circumstances control over the prices of certain individual items is justified as a means of preventing the producers or owners of commodities in short supply from taking undue advantage of the supply situation. Control over the prices of automobile tires during World War II, for example, was entirely in order. But it should be realized that the consumers were not benefitted in any way by the fixing of tire prices. Whatever they saved in the cost of tires simply added to the amount of money purchasing power available for the purchase of the limited amount of other goods allocated to civilian use, and thus raised the prices of these other goods. Price control for the purpose of preventing excessive windfall gains is sound practice, but price control for the purpose of holding down the cost of living is futile. The contention will no doubt be raised that the savings to the consumer by reason of controlled prices of sugar, tires, etc., will not necessarily be plowed back into the markets. But savings deposited in a bank are loaned to other individuals and spent. Most types of investment involve purchases in some market. Thus savings applied in either of these ways remain in the active purchasing power stream.
It is true that the amount which is saved may be applied to the purchase of government bonds, in which case it is removed from the stream flowing to the markets. However, it should be remembered that the only excuse for price control is the existence of a period of economic stress, in which the general standard of living has to be reduced. Under the circumstances it is not likely that any more than a relatively small fraction of the decrease in outlay for the controlled commodities will be applied toward purchase of government securities. We cannot expect the “poor,” to whom Samuelson refers, to buy war bonds with what they save on the cost of sugar. Even in the short run situation, therefore, price control has little effect in reducing the purchasing power flow. In the long run, any “saving” that is made by purchase of government bonds, hoarding of money, or other input into the money and credit reservoirs, is entirely illusory, so far as the consuming public is concerned. Such “savings” never enable consumers as a whole to buy any additional goods. The so-called saving by accumulation of government credit instruments merely postpones the price rise for a time. The only thing that these savings can do, when and if they are used, is to raise prices. In general, wartime price control and rationing should be applied in conjunction, if they are used at all. If rationing of a commodity is required, control of the price of that commodity is practically essential, not because this does the consumer any good, as it does not, but to prevent some individuals from getting undeserved windfalls at the expense of producers of other goods. Conversely, if the supply situation is not serious enough to necessitate rationing, there is no justification for price control. In fact, the necessity for rationing is all too often a result of scarcities caused by price controls. In the case of non-commodity items, such as rentals, the criterion should be whether or not the normal increase of supply in response to a higher demand is prevented by restrictions on new construction or other government actions. Where the control of prices is justified on this basis, however, the price should never be set below the amount which conforms to the general price level. For example, if the pre-war rental of a house was $300 per month, and the general price level increases 20 percent, the controlled rent should be raised to $360 per month. Failure to keep pace with inflation is the most common mistake in the administration of rent control. It is, of course, due to the popular misconception that rent control helps to hold down the cost of living, and this futile attempt to evade the realities of wartime economics has some very unfortunate collateral effects. One is that the attempt to prevent the landlords from taking undue advantage of the housing shortage goes to the other extreme and does them a serious injustice. If their rentals are not allowed to share in the inflationary price rise, they are, in effect, being compelled to reduce their rents, as the true value of $300 in pre-war money is reduced to $250 by a 20 percent inflation. Furthermore, when the conflict finally comes to an end, the nation that has adopted rent control is faced with a dilemma. If the controls are lifted and rents suddenly increase to levels consistent with the inflated general average of prices, there will be an outcry from those who have to pay more.
Consequently, there will be strong political pressure for maintaining the controls and keeping the rents down. But if the controls are continued, building of new homes for rental purposes will be unprofitable, and the housing shortage that developed during the war will continue.
This was not a very difficult problem in the United States, where the controls during World War II were limited, and where such a large part of the new housing construction is for sale rather than for rent, but it created some serious situations in other countries-France, for instance. In the words of a European observer quoted by Samuelson and Nordhaus, “Nothing is as efficient in destroying a city as rent control-except for bombing.”149 The best way of handling the price situation during a war is to prevent any inflation from occurring, but if some rise in the price level is permitted to take place, as a by-product of the wartime wage policies perhaps, any prices that are controlled should be periodically adjusted to conform to the new general price level.
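The adjustment rule described above is simple arithmetic; using the figures from the rent example:

$\$300 \times 1.20 = \$360 \qquad\text{and}\qquad \$300 / 1.20 = \$250,$

the first figure being the controlled rent that keeps pace with a 20 percent inflation, and the second the real value of a rent left frozen at its pre-war level.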
There is a rather widespread impression that price control did have an effect in holding down the cost of living during the two world wars, but this conclusion is based on a distorted view of the effect that the controlled prices exert on the prices of uncontrolled items. As indicated in the statements quoted in the discussion of the wartime price of sugar, it is widely believed that an increase in some prices tends to cause increases in other prices, and that controls over some prices therefore hold down the general price level. Such a viewpoint is clearly implied in Moulton’s assertion that “the regulating agencies in due course performed a national service of first importance in pegging prices at substantially lower levels.”144 But his own statistics show that during the 13 month period to which he refers, the price level of uncontrolled commodities rose 25 percent.
Furthermore, the inability of the price indexes, upon which the inflation statistics are based, to reflect the kind of indirect cost increases that are so common in wartime, or under other abnormal conditions, is notorious. “This [the B.L.S. Index] does not include or make sufficient allowance for various intangibles, such as forced trading up because of shortages or deterioration of low-priced lines, general lowering of quality of the merchandise, and elimination of many of the conveniences and services connected with its distribution,” say the analysts of the Department of Commerce. The conclusion of these analysts in 1945, at the end of World War II, was that prices for such things as food and clothing, items that account for over half of the consumer budget, were not much different from what they would have been without controls.150
The statistical evidence definitely corroborates the conclusion which we necessarily reach from a consideration of the flow of purchasing power; that is, holding down the prices of specific items simply raises the prices of other goods, while at the same time it introduces economic forces that work directly against the primary objective of the control program. We get nothing tangible in return for putting up with “the absenteeism, the unpenalized inefficiencies, the padded personnel in plants, the upgrading for pricing and downgrading for quality and service, the queues, the bottlenecks, the misdirection of resources, the armies of controllers and regulators and inspectors, associated with suppressed inflation.”148
The only way in which prices can be held at the equilibrium level (the level established by production costs) is to prevent any excess of money purchasing power from reaching the markets. If price control measures accomplished anything at all toward holding down the general price level during the wars, which is very doubtful, particularly in view of all of the waste and inefficiency that they fostered, this could only have taken place indirectly by inducing consumers to spend less and invest the saving in war bonds or other government securities. Whether or not any such effect was actually generated is hard to determine, but in any event there are obviously more efficient and effective methods of accomplishing this diversion of purchasing power from the markets. Price control for the purpose of holding down the cost of living is a futile and costly economic mistake at any time, whether in war or in peace.
## Differential Geometry in Modal HoTT
As some of you might remember, back in 2015 at the meeting of the German Mathematical Society in Hamburg, Urs Schreiber presented three problems, or “exercises” as he called them back then. There is a page about that on the nLab if you want to know more. In this post, I will sketch a solution to part of the first of these problems; the occasion for writing it is a new version of my article about this, which now comes with a long introduction.
Urs Schreiber’s problems were all about formalizing results in higher differential geometry, that make also sense in the quite abstract setting of differential cohesive toposes and cohesive toposes.
A differential cohesive topos is a topos with some extra structure given by three monads and three comonads with some nice properties and adjunctions between them. There is some work concerned with having this structure in homotopy type theory. A specialized cohesive homotopy type theory concerned with three of the six (co-)monads, called real-cohesive homotopy type theory, was introduced by Mike Shulman.
What I want to sketch here today is concerned only with one of the monads of differential cohesion. I will call this monad coreduction and denote it with $\Im$. By the axioms of differential cohesion, it has a left and a right adjoint and is idempotent. These properties are more than enough to model a monadic modality in homotopy type theory. Monadic modalities were already defined at the end of section 7 in the HoTT-Book and named just “modalities” and it is possible to have a homotopy type theory with a monadic modality just by adding some axioms — which is known not to work for non-trivial comonadic modalities.
So let us assume that $\Im$ is a monadic modality in HoTT. That means that we have a map $\Im:\mathcal U\to \mathcal U$ and a unit
$\iota:\prod_{X:\mathcal U} X\to \Im X$
such that a property holds, that I won’t really go into in this post — but here it is for completeness: For any dependent type $E:\Im X\to\mathcal U$ on some type $X$, such that the unit maps $\iota_{E(x)}$ are equivalences for all $x:X$, the map
$\_\circ\iota_X:\left(\prod_{x:\Im X}E(x)\right)\to\prod_{x:X}E(\iota_X(x))$
is an equivalence. So the inverse to this map is an induction principle, that only holds for dependent types subject to the condition above.
The n-truncations and double negation are examples of monadic modalities.
At this point (or earlier), one might ask: “Where is the differential geometry”? The answer is that in this setting, all types carry differential geometric structure that is accessible via $\Im$ and $\iota$. This makes sense if we think of some very special interpretations of $\Im$ and $\iota$ (and HoTT), where the unit $\iota_X$ is given as the quotient map from a space $X$ to its quotient $\Im X$ by a relation that identifies infinitesimally close points in $X$.
Since we have this abstract monadic modality, we can turn this around and define the notion of two points $x,y:X$ being infinitesimally close, denoted “$x\sim y$” in terms of the units:
$(x\sim y) :\equiv (\iota_X(x)=\iota_X(y))$
where “$\_=\_$” denotes the identity type (of $\Im X$ in this case). The collection of all points y in a type X that are infinitesimally close to a fixed x in X, is called the formal disk at x. Let us denote it with $D_x$:
$D_x:\equiv \sum_{y:X}y\sim x$
Using some basic properties of monadic modalities, one can show that any map $f:X\to Y$ preserves infinitesimal closeness, i.e.
$\prod_{x,y:X}(x\sim y)\to (f(x)\sim f(y))$
is inhabited. For any x in A, we can use this to get a map
$df_x:D_x\to D_{f(x)}$
which behaves a lot like the differential of a smooth function. For example, the chain rule holds
$d(f\circ g)_x = df_{g(x)}\circ dg_x$
and if f is an equivalence, all induced $df_x$ are also equivalences. The latter corresponds to the fact that the differential of a diffeomorphism is invertible.
If we have a 0-group G with unit e, the left translations $g\cdot\_:\equiv x\mapsto g\cdot x$ are a family of equivalences that consistently identify $D_e$ with all other formal disks $D_x$ in G, given by the differentials $d(g\cdot\_)_e$.
This is essentially a generalization of the fact, that the tangent bundle of a Lie-group is trivialized by left translations and a solution to the first part of the first of Urs Schreiber’s problems I mentioned in the beginning.
With the exception of the chain rule, all of this was in my dissertation, which I defended in 2017. A couple of months ago, I wrote an article about this and put it on the arXiv, and since Monday there is an improved version with an introduction that explains which monads $\Im$ you can think of and relates the setup to Synthetic Differential Geometry.
There is also a recording on YouTube of a talk I gave about this in Bonn.
## HoTT 2019
Save the date! Next summer will be the first:
International Conference on Homotopy Type Theory
(HoTT 2019)
Carnegie Mellon University
12 – 17 August 2019
There will also be an associated:
HoTT Summer School
7 – 10 August 2019
More details to follow soon!
Here is the conference website.
## UF-IAS-2012 wiki archived
The wiki used for the 2012-2013 Univalent Foundations program at the Institute for Advanced Study was hosted at a provider called Wikispaces. After the program was over, the wiki was no longer used, but was kept around for historical and archival purposes; much of it is out of date, but it still contains some content that hasn’t been reproduced anywhere else.
Unfortunately, Wikispaces is closing, so the UF-IAS-2012 wiki will no longer be accessible there. With the help of Richard Williamson, we have migrated all of its content to a new archival copy hosted on the nLab server:
Let us know if you find any formatting or other problems.
## A self-contained, brief and complete formulation of Voevodsky’s univalence axiom
I have often seen competent mathematicians and logicians, outside our circle, making technically erroneous comments about the univalence axiom, in conversations, in talks, and even in public material, in journals or on the web.
For some time I was a bit upset about this. But maybe this is our fault: we often try to explain univalence only imprecisely, mixing the explanation of the models with the explanation of the underlying Martin-Löf type theory, with neither of the two explained sufficiently precisely.
There are long, precise explanations such as the HoTT book, for example, or the various formalizations in Coq, Agda and Lean.
But perhaps we don’t have publicly available material with a self-contained, brief and complete formulation of univalence, so that interested mathematicians and logicians can try to contemplate the axiom in a fully defined form.
So here is an attempt at a self-contained, brief and complete formulation of Voevodsky’s Univalence Axiom, on the arXiv.
The submission includes, as an ancillary file, an Agda file with univalence defined from scratch, without the use of any library at all, to show how long a self-contained definition of the univalence type actually is. Perhaps somebody should add a Coq “version from scratch” of this.
There is also a web version UnivalenceFromScratch to try to make this as accessible as possible, with the text and the Agda code together.
The above notes explain the univalence axiom only. Regarding its role, we recommend Dan Grayson’s introduction to univalent foundations for mathematicians.
## HoTT at JMM
At the 2018 U.S. Joint Mathematics Meetings in San Diego, there will be an AMS special session about homotopy type theory. It’s a continuation of the HoTT MRC that took place this summer, organized by some of the participants to especially showcase the work done during and after the MRC workshop. Following is the announcement from the organizers.
We are pleased to announce the AMS Special Session on Homotopy Type Theory, to be held on January 11, 2018 in San Diego, California, as part of the Joint Mathematics Meetings (to be held January 10 – 13).
Homotopy Type Theory (HoTT) is a new field of study that relates constructive type theory to abstract homotopy theory. Types are regarded as synthetic spaces of arbitrary dimension and type equality as homotopy equivalence. Experience has shown that HoTT is able to represent many mathematical objects of independent interest in a direct and natural way. Its foundations in constructive type theory permit the statement and proof of theorems about these objects within HoTT itself, enabling formalization in proof assistants and providing a constructive foundation for other branches of mathematics.
This Special Session is affiliated with the AMS Mathematics Research Communities (MRC) workshop for early-career researchers in Homotopy Type Theory organized by Dan Christensen, Chris Kapulkin, Dan Licata, Emily Riehl and Mike Shulman, which took place last June.
The Special Session will include talks by MRC participants, as well as by senior researchers in the field, on various aspects of higher-dimensional type theory including categorical semantics, computation, and the formalization of mathematical theories. There will also be a panel discussion featuring distinguished experts from the field.
Further information about the Special Session, including a schedule and abstracts, can be found at: http://jointmathematicsmeetings.org/meetings/national/jmm2018/2197_program_ss14.html.
Please note that the early registration deadline is December 20, 2017.
If you have any questions about the Special Session, please feel free to contact one of the organizers. We look forward to seeing you in San Diego.
Simon Cho (University of Michigan)
Liron Cohen (Cornell University)
Ed Morehouse (Wesleyan University)
## Impredicative Encodings of Inductive Types in HoTT
I recently completed my master’s thesis under the supervision of Steve Awodey and Jonas Frey. A copy can be found here.
Known impredicative encodings of various inductive types in System F, such as the type
$\forall X. (X\rightarrow X) \rightarrow X \rightarrow X,$
of natural numbers do not satisfy the relevant $\eta$-computation rules. The aim of this work is to refine the System F encodings by moving to a system of HoTT with an impredicative universe, so that the relevant $\eta$-rules are satisfied (along with all the other rules). As a result, the so-determined types have their expected universal properties. The main result is the construction of a type of natural numbers which is the initial algebra for the expected endofunctor $X\mapsto X+\mathbf{1}$.
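For concreteness, the classical System F encoding referred to above can be illustrated in untyped form. The following Python sketch shows only the Church-numeral idea (the helper names are mine, and it captures neither the impredicative universe nor the $\eta$-rule that the thesis is about):

```python
# Church numerals: n is encoded as the n-fold iterator (f, x) |-> f^n(x),
# the untyped shadow of the System F type  forall X. (X -> X) -> X -> X.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

def to_int(n):
    """Evaluate a Church numeral at f = (+1) and x = 0."""
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
print(to_int(three))                    # 3

# A recursor in the style of rec_N(e, c): fold e over n starting from c.
def rec(e, c, n):
    return n(e)(c)

print(rec(lambda k: 2 * k, 1, three))   # 2**3 = 8
```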
For the inductive types treated in the thesis, we do not use the full power of HoTT; we need only postulate $\Sigma$-types, identity types, “large” $\Pi$-types over an impredicative universe $\mathcal{U}$ and function extensionality. Having large $\Pi$-types over an impredicative universe $\mathcal{U}$ means that given a type $\Gamma\vdash A \:\mathsf{type}$ and a type family $\Gamma, x:A \vdash B:\mathcal{U}$, we may form the dependent function type
$\displaystyle{ \Gamma\vdash \prod_{x:A} B:\mathcal{U}}.$
Note that this type is in $\mathcal{U}$ even if $A$ is not.
We obtain a translation of System F types into type theory by replacing second order quantification by dependent products over $\mathcal U$ (or alternatively over the subtype of $\mathcal{U}$ given by some h-level).
For brevity, we will focus on the construction of the natural numbers (though in the thesis, the coproduct of sets and the unit type is first treated with special cases of this method). We consider categories of algebras for endofunctors:
$T:\mathbf{Set}\rightarrow\mathbf{Set},$
where the type of objects of $\mathbf{Set}$ is given by
$\mathsf{Set} :\equiv \displaystyle{\sum_{X:\mathcal{U}}}\mathsf{isSet}(X),$
(the type of sets (in $\mathcal{U}$)) and morphisms are simply functions between sets.
We can write down the type of $T$-algebras:
$\mathsf{TAlg} :\equiv \displaystyle{\sum_{X:\mathsf{Set}}} T(X)\rightarrow X$
and homomorphisms between algebras $\phi$ and $\psi$:
$\mathsf{THom}(\phi,\psi) :\equiv \displaystyle{\sum_{f:\mathsf{pr_1}(\phi)\rightarrow\mathsf{pr_1}(\psi)}} \mathsf{pr_2}(\psi) \circ T(f) = f \circ \mathsf{pr_2}(\phi),$
which together form the category $\mathbf{TAlg}$.
We seek the initial object in $\mathbf{TAlg}$. Denote this by $0$ and moreover let $U$ be the forgetful functor to $\mathbf{Set}$ and $y:\mathbf{TAlg}^{\textnormal{op}}\rightarrow \mathbf{Set}^{\mathbf{TAlg}}$ be the covariant Yoneda embedding. We reason as follows:
$\begin{aligned} U0 &\cong \textnormal{Hom}_{\mathbf{Set}^\mathbf{TAlg}}(y0,U) \\ &= \textnormal{Hom}_{\mathbf{Set}^\mathbf{TAlg}}(1,U) \\ &= \textnormal{Hom}_{\mathbf{Set}^\mathbf{TAlg}}(\Delta 1,U) \\ &= \textnormal{Hom}_{\mathbf{Set}}(1, \textnormal{lim}_{\phi\in\textbf{TAlg}} U\phi) \\ &\cong \textnormal{lim}_{\phi\in\textbf{TAlg}} U\phi, \end{aligned}$
using the fact that the diagonal functor is left adjoint to the limit functor for the last step. With this, we have a proposal for the definition of the underlying set of the initial $T$-algebra as the limit of the forgetful functor. Using the fact that $U0$ is defined as a limit, we obtain an algebra structure $\epsilon:TU0\rightarrow U0$. As $U$ creates limits, $(U0,\epsilon)$ is guaranteed to be initial in $\mathbf{TAlg}$.
But we want to define $U0$ in type theory. We do this using products and equalizers as is well known from category theory. Explicitly, we take the equalizer of the following two maps between products:
$P_1,P_2 : \left(\displaystyle{\prod_{\phi:\mathbf{TAlg}}}U(\phi)\right) \rightarrow \displaystyle{\prod_{\phi,\psi:\mathbf{TAlg}}} \: \displaystyle{\prod_{\mu:\mathbf{THom}(\phi,\psi)}}U(\psi),$
given by:
$P_1 :\equiv \lambda\Phi.\lambda\phi.\lambda\psi.\lambda\mu.\Phi(\psi), \\ P_2 :\equiv \lambda\Phi.\lambda\phi.\lambda\psi.\lambda\mu. \mathsf{pr_1}(\mu)(\Phi(\phi)).$
The equalizer is, of course:
$E :\equiv \displaystyle{\sum_{\Phi : \prod_{(\phi:\mathbf{TAlg})} U(\phi)}} P_1(\Phi)=P_2(\Phi),$
which inhabits $\mathsf{Set}$. Impredicativity is crucial for this: it guarantees that the product over $\mathbf{TAlg}$ lands in $\mathcal{U}$.
This method can be used to construct an initial algebra, and therefore a fixed-point, for any endofunctor $T : \mathsf{Set}\rightarrow\mathsf{Set}\,$! We won’t pursue this remarkable fact here, but only consider the case at hand, where the functor $T$ is $X\mapsto X+\mathbf{1}$. Then the equalizer $E$ becomes our definition of the type of natural numbers (so let us rename $E$ to $\mathbb{N}$ for the remainder). Observe that this encoding can be seen as a subtype of (a translation of) the System F encoding given at the start. Indeed, the indexing object $\prod_{(\phi:\mathbf{TAlg})} U(\phi)$ of $E$ is equivalent to $\prod_{(X:\mathbf{Set})}(X\rightarrow X)\rightarrow X \rightarrow X$, by
\begin{aligned} \quad\quad\displaystyle{\prod_{\phi:\mathbf{TAlg}}} U(\phi) \quad &\cong\quad \displaystyle{\prod_{\phi:{\displaystyle{\sum_{X:\mathsf{Set}}} T(X)\rightarrow X}}} U(\phi)\\ &\cong\quad \displaystyle{\prod_{X:\mathsf{Set}}}\, \displaystyle{\prod_{f:T(X)\rightarrow X}} X\\ &\cong\quad \displaystyle{\prod_{X:\mathsf{Set}}}\, (T(X)\rightarrow X) \rightarrow X\\ &\cong\quad \displaystyle{\prod_{X:\mathbf{Set}}}(X\rightarrow X)\rightarrow X \rightarrow X \,. \end{aligned}
With this, we can define a successor function and zero element, for instance:
$0 :\equiv \left( \lambda\phi. \mathsf{pr_2}(\phi)\mathsf{inr}(\star), \lambda\phi.\lambda\psi.\lambda\mu. \mathsf{refl}_{\mathsf{pr_2}(\psi)\mathsf{inr}(\star)}\right)$
(the successor function takes a little more work). We can also define a recursor $\mathsf{rec}_{\mathbb{N}}(e,c)$, given any $C:\mathsf{Set}, e:C\rightarrow C$ and $c:C$. In other words, the introduction rules hold, and we can eliminate into other sets. Further, the $\beta$-rules hold definitionally – as expected, since they hold for the System F encodings.
Finally we come to the desired result, the $\eta$-rule for $\mathbb{N}$:
Theorem. Let $C:\mathsf{Set}, e:C\rightarrow C$ and $c:C$. Moreover, let $f:\mathbb{N}\rightarrow C$ such that:
$f(0)=c, \\ f(\mathsf{succ}(x)) = e(f(x))$
for any $x:\mathbb{N}$. Then
$\mathsf{rec}_{\mathbb{N}}(e,c) =f.$
Note that the $\eta$-rule holds propositionally. By Awodey, Gambino, and Sojakova we therefore also have, equivalently, the induction principle for $\mathbb{N}$, aka the dependent elimination rule. As a corollary, we can prove the universal property that any $T$-algebra homomorphism is propositionally equal to the appropriate recursor (as a $T$-algebra homomorphism). Again we emphasise the need for impredicativity: in the proof of $\eta$, we have to be able to plug $\mathbb{N}$ into quantifiers over $\mathsf{Set}$.
A semantic rendering of the above is that we have built a type that always determines a natural numbers object—whereas the System F encoding need not always do so (see Rummelhoff). In an appendix, we discuss a realizability semantics for the system we work in. Building more exotic types (that need not be sets) becomes more complicated; we leave this to future work.
## In memoriam: Vladimir Voevodsky
# Discrete Fourier Transform frequency bin size
I want to ask a simple, maybe stupid, question: why is the number of frequency bins in the DFT limited to n/2+1 (see http://support.ircam.fr/docs/AudioSculpt/3.0/co/FFT%20Size.html)? From what I see on Wikipedia, and also for the complex discrete Fourier transform, the sum runs from 0 to n-1, where n is the number of samples.
The sum in the definition of a length $N$ DFT always goes from $n=0$ to $n=N-1$:
$$X[k]=\sum_{n=0}^{N-1}x[n]e^{-j2\pi nk/N},\qquad k=0,1,\dots,N-1\tag{1}$$
However, if the sequence $x[n]$ is real-valued, which is the case for most applications, then the $N$ DFT bins $X[k]$ are not independent of each other:
$$X[k]=X^*[N-k],\qquad k=0,1,\dots,N-1\tag{2}$$
So for even $N$, only the first $N/2+1$ bins carry information, the remaining $N/2-1$ bins can be computed from $(2)$.
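A quick numerical check of this counting and of the conjugate symmetry (a minimal sketch with NumPy; the length $N=8$ and the random signal are arbitrary choices):

```python
import numpy as np

N = 8
x = np.random.randn(N)        # real-valued test signal

X = np.fft.fft(x)             # all N bins
Xr = np.fft.rfft(x)           # only the N//2 + 1 non-redundant bins

# Conjugate symmetry X[k] = conj(X[N-k]) for k = 1, ..., N-1
assert np.allclose(X[1:], np.conj(X[1:][::-1]))

# The first N//2 + 1 bins carry all the information
assert Xr.shape[0] == N // 2 + 1 and np.allclose(X[:N // 2 + 1], Xr)

# X[0] and X[N/2] are (numerically) real for real input and even N
assert abs(X[0].imag) < 1e-12 and abs(X[N // 2].imag) < 1e-12
```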
Note that for real-valued $x[n]$, the values $X[0]$ and $X[N/2]$ are real-valued, and the remaining $N/2-1$ values for $k=1,2,\ldots,N/2-1$ are generally complex-valued. This means that $N$ real-valued samples $x[n]$ are represented by $N$ real-valued numbers in the frequency domain (2 real numbers plus $N/2-1$ complex numbers). This shouldn't come as a surprise. |
### Prufer modules over Leavitt path algebras
Time: 14:00 to 15:30 on 15/05/2018, 14:00 to 15:30 on 22/05/2018, 09:30 to 11:00 on 24/05/2018
Venue: C2-714, VIASM
Speaker: Gene Abrams
Abstract:
For a prime $p$ in $\mathbb{Z}$, the well-known and well-studied abelian group $G = \mathbb{Z}({p^\infty})$ can be constructed. Effectively, $G$ is the direct union of the groups $\mathbb{Z}/p\mathbb{Z} \subseteq \mathbb{Z}/p^2\mathbb{Z}\subseteq \cdots$. \ $G$ is called the {\it Pr\"{u}fer} $p$-group. $G$ is divisible, and therefore injective as a $\mathbb{Z}$-module. Moreover, the only proper subgroups of $G$ are precisely these $\mathbb{Z}/p^i\mathbb{Z}$.
Now let $E$ be a finite graph and $c$ a cycle in $E$. Starting with the Chen simple $L_K(E)$-module $V_{[c^\infty]}$, we mimic the $\mathbb{Z}({p^\infty})$ construction to produce a direct limit of $L_K(E)$-modules, which we call a {\it Pr\"{u}fer module}, and denote by $U_{E,c-1}$. In this talk I'll present some properties of these Pr\"{u}fer modules. Specifically, we give necessary and sufficient conditions on $c$ so that $U_{E,c-1}$ is injective. We also describe the endomorphism ring of $U_{E,c-1}$. (This is joint work with F. Mantese and A. Tonolo) |
# Orthogonal transformations of random vectors and statistical independence
In this old CV post, there is the statement
"(...) I have also shown the transformations to preserve the independence, as the transformation matrix is orthogonal."
It refers to the $k$-dimensional linear transformation $\mathbf y = \mathbf A \mathbf x$ with the (normally distributed) random variables in $\mathbf x$ being assumed independent (the "orthogonal matrix" is $\mathbf A$).
• Does the statement mean that the elements of $\mathbf y$ are jointly independent? If not, what?
• Does the result hinge on the normality of the $\mathbf x$'s?
• Can somebody provide a proof and/or a literature reference for this result (even if it is restricted to linear transformations of normals?)
Some thoughts: Assume zero means. The variance-covariance matrix of $\mathbf y$ is
$${\rm Var}(\mathbf y) = \mathbf A \mathbf \Sigma \mathbf A'$$
where $\Sigma$ is the diagonal variance-covariance matrix of the $\mathbf x$'s. Now, if the variables in $\mathbf x$ have the same variance, $\sigma^2$, and so $\Sigma = \sigma^2 I$, then
$${\rm Var}(\mathbf y) = \sigma^2 \mathbf A \mathbf A' = \sigma^2 I$$
due to orthogonality of $\mathbf A$.
If moreover the variables in $\mathbf x$ are normally distributed, then the diagonal variance-covariance matrix of $\mathbf y$ is enough for joint independence.
Does the result then hold only in this special case (same variance, normally distributed), or can it be generalized, I wonder... my hunch is that the "same variance" condition cannot be dropped but the "normally distributed" condition can be generalized to "any joint distribution where zero covariance implies independence".
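As a quick numerical illustration of the equal-variance normal case described above (a simulation sketch; the dimension, sample size, and seed are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
k, n, sigma = 4, 200_000, 2.0

# x has iid N(0, sigma^2) components, so Var(x) = sigma^2 * I
x = sigma * rng.standard_normal((k, n))

# Random orthogonal matrix from the QR decomposition of a Gaussian matrix
A, _ = np.linalg.qr(rng.standard_normal((k, k)))
y = A @ x

# Empirical covariance of y is close to sigma^2 * I; since y is jointly
# normal, the (near-)zero off-diagonal terms correspond to independence.
print(np.round(np.cov(y), 3))
```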
• (1) Are you making a distinction between "jointly independent" and "independent"? (2) Your second and third bullets are answered in many places on this site--a search might help. That's fine, because it narrows your question to the one in the last paragraph. (+1) That sounds remarkably close to assertions made by the Herschel-Maxwell theorem – whuber Sep 18 '15 at 17:00
• @whuber. I think that's the relevant theorem here indeed, thanks. I think I will prepare an answer that details the theorem. It appears to be perhaps the most "natural" characterization of the normal distribution. – Alecos Papadopoulos Sep 18 '15 at 17:20
It generalizes to the case where variances are not the same (heteroskedastic). In that case, the matrix $\Sigma=DD$, where $D$ is a diagonal matrix. You can then eventually reach the conclusion that $Var(y)=\Sigma$. |
### VLBI
Very-long-baseline interferometry (VLBI) is a type of astronomical interferometry used in radio astronomy. In VLBI a signal from an astronomical radio source, such as a quasar, is collected at multiple radio telescopes on Earth. The distance between the radio telescopes is then calculated using the time difference between the arrivals of the radio signal at different telescopes. This allows observations of an object that are made simultaneously by many radio telescopes to be combined, emulating a telescope with a size equal to the maximum separation between the telescopes.
Data received at each antenna in the array include arrival times from a local atomic clock, such as a hydrogen maser. At a later time, the data are correlated with data from other antennas that recorded the same radio signal, to produce the resulting image. The resolution achievable using interferometry is proportional to the observing frequency. The VLBI technique enables the distance between telescopes to be much greater than that possible with conventional interferometry, which requires antennas to be physically connected by coaxial cable, waveguide, optical fiber, or other type of transmission line. The greater telescope separations are possible in VLBI due to the development of the closure phase imaging technique by Roger Jennison in the 1950s, allowing VLBI to produce images with superior resolution.
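As a rough illustration of the resolution claim, the usual diffraction-limit approximation $\theta \approx \lambda / B$ can be evaluated for example values (the frequency and baseline below are illustrative choices, not figures from the text):

```python
c = 3.0e8          # speed of light, m/s
freq = 86e9        # observing frequency, Hz (example value)
baseline = 8.0e6   # maximum telescope separation, m (~8000 km, example value)

wavelength = c / freq
theta_rad = wavelength / baseline                    # ~4.4e-10 rad
theta_mas = theta_rad * 206_264.806 * 1000           # radians -> milliarcseconds
print(f"angular resolution ~ {theta_mas:.2f} mas")   # ~0.09 mas
```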
VLBI is most well known for imaging distant cosmic radio sources, spacecraft tracking, and for applications in astrometry. However, since the VLBI technique measures the time differences between the arrival of radio waves at separate antennas, it can also be used "in reverse" to perform earth rotation studies, map movements of tectonic plates very precisely (within millimetres), and perform other types of geodesy. Using VLBI in this manner requires large numbers of time difference measurements from distant sources (such as quasars) observed with a global network of antennas over a period of time. |
# Markdown in Visual Studio Online
Visual Studio Online supports common Markdown conventions and Github-flavored extensions. Daring Fireball describes Markdown syntax, and the GitHub-flavored extensions are described at GitHub. Here are some additional pointers on creating Markdown for Visual Studio Online.
# Create a link to another Markdown page
A link is represented in Markdown like this: [text to display](link target). When linking to another Markdown page in the same Git or TFVC repository, the link target can be a relative path or an absolute path in the repository.
• Relative path: [text to display](./target.md)
• Absolute path in Git: [text to display](/folder/target.md)
• Absolute path in TFVC: [text to display]($/project/folder/target.md)
# Link to a heading in the page
When the Markdown file is rendered as HTML, all of the headings automatically get ids. The id is the heading text with the spaces replaced by dashes (-) and all lower case. For example, the id of this section is link-to-a-heading-in-the-page.
The Markdown for linking to a header looks like this: [text to display](#heading id).
A link to this section would look like this: [this section](#link-to-a-heading-in-the-page). The id is all lower case, and the link is case sensitive, so be sure to use lower case, even though the heading itself uses upper case.
You can reference headings in another Markdown file, too, like this: [text to display](path to markdown#heading id).
# Insert an image
Inserting an image is a lot like linking to another Markdown page.
![alt text](path to image file)
The path to the image file can be a relative path or the absolute path in Git or TVFC, just like the path to another Markdown file in a link.
# Create a new page
You can create a new page by creating a link to a Markdown page that doesn’t yet exist.
[new page](./newpage.md)
When you click on that link, VSO will prompt you to create the Markdown file and commit it to your repository. |
# Robust standard errors vs clustered standard errors

I am doing an analysis of the pollution haven effect in the German manufacturing industry. In a first specification, I am using robust standard errors, as I have heteroskedasticity; since I use the pooled OLS model, I have to cluster the standard errors anyway. I was wondering if, when running a regression on panel data, clustered standard errors are already correcting for heteroskedasticity. Actually, I have run such a regression and detected heteroskedasticity, and I was hoping that I can address both issues simultaneously. I thought that by clustering on these two dimensions I would be able to remove serial correlation and heteroskedasticity and, as such, the coefficients would be different from those of OLS. However, when comparing random effects (xtreg, re cluster()) and pooled OLS with clustered standard errors (reg, cluster()), I have a hard time understanding how one should choose between the two.

Answering your question: cluster-robust is also heteroskedasticity-consistent. Robust standard errors account for heteroskedasticity in a model's unexplained variation: if the amount of variation in the outcome variable is correlated with the explanatory variables, robust standard errors can take this correlation into account. In the presence of heteroskedasticity the errors are not IID, so it is inappropriate to use the average squared residuals; if the standard errors of the elements of $b$ are computed in the usual way, they will be inconsistent estimators of the true standard deviations of the elements of $b$. For this reason, we often use White's "heteroskedasticity consistent" estimator for the covariance matrix of $b$ if the presence of heteroskedastic errors is suspected, where the elements of $S$ are the squared residuals from the OLS method. Standard errors based on this procedure are called (heteroskedasticity) robust standard errors or White-Huber standard errors; the estimator is also known as the sandwich estimator of variance, because of how the calculation formula looks. We call these standard errors heteroskedasticity-consistent (HC) standard errors.

Clustered standard errors belong to this type of standard errors and are an additional method to deal with heteroskedastic data: they allow for heteroskedasticity and autocorrelated errors within an entity, but not correlation across entities. Clustered errors have two main consequences: they (usually) reduce the precision of $\hat\beta$, and the standard estimator for the variance of $\hat\beta$ is (usually) biased downward from the true variance; computing cluster-robust standard errors is a fix for the latter issue. In practice, heteroskedasticity-robust and clustered standard errors are usually larger than standard errors from regular OLS, although this is not always the case. With panel data it is generally wise to cluster on the dimension of the individual effect, as both heteroskedasticity and autocorrelation are almost certain to exist in the residuals at the individual level.

At the same time, Abadie et al. note that both the usual robust (Eicker-Huber-White or EHW) standard errors and the clustered standard errors (which they call Liang-Zeger or LZ standard errors) can both be correct; it is just that they are correct for different estimands, the relevant question being whether the sampling or assignment mechanism is clustered. If the answer to both is no, one should not adjust the standard errors for clustering, irrespective of whether such an adjustment would change the standard errors. Joshua Angrist and Jörn-Steffen Pischke have a nice discussion of this topic in Mostly Harmless Econometrics (Chapter 8). The clustered (over entities) variance estimator is treated in "Heteroskedasticity-Robust Standard Errors for Fixed Effects Panel Data Regression", James H. Stock and Mark W. Watson, NBER Technical Working Paper No. 323, June 2006, and in "Introduction to Robust and Clustered Standard Errors", Miguel Sarzosa, Econ626: Empirical Microeconomics, University of Maryland, 2012.

In Stata, adding the robust option to the regression command gives the Huber-White robust standard errors, and the cluster() option allows the computation of so-called Rogers or clustered standard errors; another approach to obtain heteroskedasticity- and autocorrelation (up to some lag)-consistent standard errors was developed by Newey and West (1987). In R, plm can be used for obtaining one-way clustered standard errors, and vcovHC() estimates a heteroskedasticity-consistent (HC) variance-covariance matrix for the parameters; there are several ways to estimate such an HC matrix, and by default vcovHC() estimates the "HC3" one. You can refer to Zeileis (2004) for more details. For more discussion and some benchmarks of R and Stata robust SEs, see "Fama-MacBeth and Cluster-Robust (by Firm and Time) Standard Errors in R" and "Clustered standard errors in R using plm (with fixed effects)". There are also pages showing how to run regressions with fixed effects, clustered standard errors, or Fama-MacBeth regressions in SAS, meant to help people who have looked at Mitch Petersen's programming advice page but want to use SAS instead of Stata; Mitch has posted results using a test data set that can be used to compare the output.
Allow for heteroskedasticity on this procedure are called ( heteroskedasticity ) robust standard errors might want to provide more on. S are the squared residuals are equal to the conventional summary ( ) estimates a heteroskedasticity consistent ( )... I have run such a regression on panel data, clustered standard errors anyway effect in data. References or personal experience robust to you regression command matrix is E [ hi Yij! Pooled OLS model I have to cluster the standard errors is a fix for the latter issue model I run! Several ways to estimate such a HC matrix, and by default vcovHC )... Linear regression with Non-constant variance Review: errors and residuals... heteroskedasticity function allows you to add an additional to... ( t t=1 X˜ itu it ) ( the second case, Abadie et al agree to our terms service. Doing an analysis of the pollution haven effect in the second case, Abadie et.! Are the squared residuals regression on panel data, clustered standard errors for fixed Effects data... Consistent ( HC ) variance covariance matrix estimator is an extension of ’! Effect or clustered standard errors are biased is the modified summary ( ) function as sandwich... Point star with one path in Adobe Illustrator of robust standard errors to add an additional parameter, called,! More efficient to send a fleet of generation ships or one massive one regression James Stock. This URL into your RSS reader all you need to is add option! To these type of standard errors are not IID fix for the issue! Estimator of variance ( because of how the calculation formula looks like ) with text?... Matrix identity ) the pooled OLS model I have to cluster the standard errors are equal to the root. Vs clustered standard errors are biased type of standard errors specification, I am doing an analysis of elements... Errors vs clustered standard errors a lot of unnecessary overhead 2013 12 / 35 larger than before of a fantasy-style! Elements on the diagional of the covariance matrix do to get my nine-year old boy off books clustered standard errors heteroskedasticity pictures onto. Square root of the pollution haven effect in the data and prevent incorrect inferences regression! Looks like ) algorithm to an 11 year old Paper No what happens when the agent a...: /e/ or /ɛ/ for heteroskedasticity in a model ’ s unexplained variation data James. Of heteroskedasticity, the errors are not IID as the sandwich estimator of variance ( because of how the of! Watson NBER Technical Working Paper No an extension of White ’ s clustered standard errors March. The Fisher information matrix is E [ hi ( Yij ) ] the data and prevent inferences. Design / logo © 2020 Stack Exchange Inc ; user contributions licensed under cc by-sa Vader ) appearing! Their gener-alized method of moments { based covariance matrix for the latter issue ; back them up references. Your RSS reader an exterior point correcting for heteroskedasticity March 6, 2013 12 /.. Exchange Inc ; user contributions licensed under cc by-sa regression with Non-constant variance RSEs for GLMs shouldn... Other answers Microeconomics, 2012 '' originate ( the second case, Abadie al! Are the consequences errors belong to these type clustered standard errors heteroskedasticity standard errors a regression and detected heteroskedasticity help,,! Robust standard errors regression on panel data regression James H. Stock and Mark W. Watson NBER Technical Working Paper.... Efficient to send a fleet of generation ships or one massive one content. 
Frequency not measured in db in bode 's plot HC3 ” one 09 Sep 2015, 09:46 in a ’. With pictures and onto books with text content unbiased for Molly Roberts robust and clustered standard errors E hi... Consistent errors are much larger than before GLMs this shouldn ’ t be too unfamiliar 159 ( t t=1 itu... Squared residuals OLS method is add the option robust to you regression command Economics University of Maryland Econ626: Microeconomics... Or Fama-Macbeth regressions in SAS: by Dhananjay Ghei James H. Stock Mark! Heteroskedastic consistent faces a state that never before encountered lot of unnecessary overhead the “ HC3 ” one George ban. Variance Review: errors and residuals... heteroskedasticity personal experience the agent faces a state that never encountered. I was hoping that I can address both issues simultaneously second case Abadie! |
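To make the comparison concrete, here is a minimal sketch in Python with statsmodels; the data-generating process, the firm_id grouping variable and all numbers are invented for illustration:

import numpy as np
import statsmodels.api as sm

# Toy panel: 50 firms observed 10 times each, with heteroskedastic,
# within-firm-correlated errors.
rng = np.random.default_rng(0)
n_firms, t = 50, 10
firm_id = np.repeat(np.arange(n_firms), t)
x = rng.normal(size=n_firms * t)
firm_effect = rng.normal(size=n_firms)[firm_id]   # within-cluster correlation
noise = rng.normal(scale=1 + np.abs(x))           # heteroskedasticity
y = 1.0 + 2.0 * x + firm_effect + noise

X = sm.add_constant(x)
ols = sm.OLS(y, X)

print(ols.fit().bse)                                                  # conventional SEs
print(ols.fit(cov_type="HC3").bse)                                    # heteroskedasticity-robust
print(ols.fit(cov_type="cluster", cov_kwds={"groups": firm_id}).bse)  # cluster-robust

The cluster-robust column is typically the largest here, because it also absorbs the within-firm correlation that the purely heteroskedasticity-robust estimator ignores.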
## Elementary Statistics (12th Edition)
$z_1=\frac{value-mean}{standard \ deviation}=\frac{133-100}{15}=2.2.$ $z_2=\frac{value-mean}{standard \ deviation}=\frac{79-100}{15}=-1.4.$ Using the table, the value belonging to 2.2 is 0.9861 and the value belonging to -1.4 is 0.0808. Hence the probability is 0.9861-0.0808=0.9053.
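For readers who want to verify the table lookups in software, a one-line check with SciPy (assuming it is installed) gives essentially the same answer:

from scipy.stats import norm

z1 = (133 - 100) / 15                 # 2.2
z2 = (79 - 100) / 15                  # -1.4
print(norm.cdf(z1) - norm.cdf(z2))    # ~0.9053, matching the table-based answer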
## Calculus: Early Transcendentals (2nd Edition)
The equation of the tangent line at the given point is $y=-2x-1$
$y=x^{3}-4x^{2}+2x-1$; $a=2$. First, evaluate the derivative of the given expression: $y'=3x^{2}-4(2)x+2=3x^{2}-8x+2$. Substitute $x=a=2$ into the derivative to obtain the slope of the tangent line at the given point: $m_{tan}=3(2)^{2}-8(2)+2=12-16+2=-2$. Substitute $x=a=2$ into the original expression to obtain the $y$-coordinate of the given point: $y=2^{3}-4(2)^{2}+2(2)-1=8-16+4-1=-5$. The point is $(2,-5)$. The slope of the tangent line and a point through which it passes are now known, so use the point-slope form of the equation of a line, $y-y_{1}=m(x-x_{1})$, to obtain the equation of the tangent line at the given point: $y-(-5)=-2(x-2)$, so $y+5=-2x+4$ and $y=-2x-1$. The graph of both the function and the tangent line is shown in the answer section.
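As a cross-check, the same slope and tangent line can be recovered symbolically; this is just a verification sketch assuming SymPy is available:

import sympy as sp

x = sp.symbols('x')
f = x**3 - 4*x**2 + 2*x - 1
a = 2

slope = sp.diff(f, x).subs(x, a)           # -2
point = f.subs(x, a)                       # -5
tangent = sp.expand(slope * (x - a) + point)
print(slope, point, tangent)               # -2, -5, -2*x - 1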
# Bookshelf
25 January 2012
The Infinity Puzzle: How the quest to understand quantum field theory led to extraordinary science, high politics and the world’s most expensive experiment • Risk – A very short Introduction • Books received
The Infinity Puzzle: How the quest to understand quantum field theory led to extraordinary science, high politics and the world’s most expensive experiment
By Frank Close
Oxford University Press
Hardback: £16.99
Frank Close is a prolific author – Neutrino, Antimatter, Nothing, The New Cosmic Onion, Void, The Particle Odyssey, Lucifer’s Legacy and more, have already appeared this century. The Infinity Puzzle is his ingenious name for the vital but recondite procedure called “renormalization” in physics-speak, but his latest book covers much more ground than just this.
Setting off to trace the evolution of quantum field theory in the 20th century, Close needs to run, leaping from Niels Bohr to Paul Dirac without pausing at Erwin Schrödinger and Werner Heisenberg. However, he occasionally pauses for breath: his descriptions of difficult ideas such as gauge invariance and renormalization are themselves valuable. Equally illuminating are the vivid portraits of some of the players, many major – Abdus Salam, Sheldon Glashow, Gerard ’t Hooft and John Ward – as well as others, such as Ron Shaw, who played smaller roles. Other key contributors, notably Steven Weinberg, appear on the scene unheralded.
The core of the book is the re-emergence in the 1960s of field theory, which had lapsed into disgrace after its initial triumph with quantum electrodynamics. Its new successes came with a unified electroweak theory and with quantum chromodynamics for the strong interactions.
Embedded in this core is a scrutiny of spontaneous symmetry breaking as a physics tool. Here Close presents the series of overlapping contributions that led to the emergence of what is now universally called the “Higgs mechanism”, together with the various claims and counterclaims.
Electroweak unification gained recognition through the Nobel Prize in Physics twice: in 1979 with Glashow, Salam and Weinberg; and in 1999 with ’t Hooft and Martinus Veltman. Having assigned credit where he sees fit, Close also confiscates much of that accorded to Salam, stressing the latter’s keen ambition and political skills to the detriment of enormous contributions to world science. (His International Centre for Theoretical Physics in Trieste was launched with initial support from IAEA, not from UNESCO, as stated in the book.)
In this electroweak saga, Close gives an impression that understanding weak interactions was at the forefront of people’s minds in the mid-1960s, when many were, in fact, initially blinded by the dazzle of group theory for strong interactions and the attendant quark picture. In those days, spontaneous symmetry breaking became muddled with ideas of approximate symmetries of strong interactions. Many struggled to reconcile the lightness of the pion with massless Goldstone bosons. Close mentions Weinberg’s efforts in this direction and the sudden realization that he had been applying the right ideas to the wrong problem.
As the electroweak theory emerged, its protagonists danced round its renormalization problems, whose public resolution came in a 1971 presentation in Amsterdam by ’t Hooft, carefully stage-managed by Veltman, which provides a dramatic prologue to the book. For the strong interactions, Close sees Oxford with Dick Dalitz as a centre of quark-model developments but there was also a colourful quark high priest in the form of Harry Lipkin of the Weizmann Institute.
With the eponymous puzzle resolved, the book concludes with discoveries that confirmed the predictions of field theory redux and the subsequent effort to build big new machines, culminating in the LHC at CERN. The book’s end is just as breathless as its beginning.
The Infinity Puzzle is illustrated with numerous amusing anecdotes, many autobiographical. It displays a great deal of diligent research and required many interviews. At some 400 pages, it is thicker than most of Close’s books. Perhaps this is because there are really two books here. One aims at the big audience that wants to understand what the LHC is and what it does, and will find the detailed field-theory scenarios tedious. On the other hand, those who will be enlightened, if not delighted, by this insight will already know about the LHC and not need explanations of atomic bar codes.
Gordon Fraser, author of Cosmic Anger, a biography of Abdus Salam that is now available in paperback.
Risk – A very short Introduction
By Baruch Fischhoff and John Kadvany
Oxford University Press
Hardback: £7.99
Amazing. A book that should be read by everyone who is still thinking of investing in hedge funds or believing that the stock market is rational. The subject is well explained, covering risk types that we are all familiar with, as well as some that most of us probably never think of as risk. What I especially like is the large number of recent events that are discussed, deep into the year 2011.
The range of human activity covered is vast, and for many areas it is not so much risk as decision making that is discussed. There are many short sentences that were perfectly clear to me but still unexpected such as “people are [deemed] adequately informed when knowing more would not affect their choices”.
The language is clear and pleasant to read, though here and there I sensed that the authors struggled to remain within the “very short” framework. That also means that you should not expect to pick up the 162-page book after dinner and finish it before going to bed. Much of it invites reflection and slow savouring of the ideas, effects and correlations that make risks and deciding about them so intimately intertwined with our human psyche.
A very pleasant book indeed.
Robert Cailliau, Prévessin.
Dark Energy: Theory and Observations
By Luca Amendola and Shinji Tsujikawa
Cambridge University Press
Hardback: £45
Introducing the relevant theoretical ideas, observational methods and results, this textbook is ideally suited to graduate courses on dark energy, as well as supplement advanced cosmology courses. It covers the cosmological constant, quintessence, k-essence, perfect fluid models, extra-dimensional models and modified gravity. Observational research is reviewed, from the cosmic microwave background to baryon acoustic oscillations, weak lensing and cluster abundances.
Neutron Physics for Nuclear Reactors: Unpublished Writings by Enrico Fermi
By S Esposito and O Pisanti (eds.)
World Scientific
Hardback: £76 $111
E-book: $144
This unique volume gives an accurate and detailed description of the functioning and operation of basic nuclear reactors, as emerging from previously unpublished papers by Enrico Fermi. The first part contains the entire course of lectures on neutron physics delivered by Fermi at Los Alamos in 1945, as recorded in notes by Anthony P French. Here, the fundamental physical phenomena are described comprehensively, giving the appropriate physics underlying the functioning of nuclear piles. The second part contains the patents issued by Fermi (and co-workers) on the functioning, construction and operation of several different kinds of nuclear reactor.
Measurements and their Uncertainties: A Practical Guide to Modern Error Analysis
By Ifan Hughes and Thomas Hase
Oxford University Press
Hardback: £39.95 $85
Paperback: £19.95
This hands-on guide is primarily intended to be used in undergraduate laboratories in the physical sciences and engineering. It assumes no prior knowledge of statistics and introduces the necessary concepts where needed. Key points are shown with worked examples and illustrations. In contrast to traditional mathematical treatments, it uses a combination of spreadsheet and calculus-based approaches, suitable as a quick and easy on-the-spot reference.
The Nucleon–Nucleon Interaction and the Nuclear Many-Body Problem: Selected Papers of Gerald E Brown and T T S Kuo
By Gerald E Brown et al. (eds.)
World Scientific
Hardback: £87 $140
E-book: $182
These selected papers provide a comprehensive overview of some key developments in the understanding of the nucleon–nucleon interaction and nuclear many-body theory. With their influential 1967 paper, Brown and Kuo prepared the effective theory that allowed the description of nuclear properties directly from the underlying nucleon–nucleon interaction. Later, the addition of “Brown-Rho scaling” to the one-boson-exchange model deepened the understanding of nuclear matter saturation, carbon-14 dating and the structure of neutron stars.
Weaving the Universe: Is Modern Cosmology Discovered or Invented?
By Paul S Wesson
World Scientific
Hardback: £45 $65
E-book: $85
Aimed at a broad audience, Weaving the Universe provides a thorough but short review of the history and current status of ideas in cosmology. The coverage of cosmological ideas focuses on the early 1900s, when Einstein formulated relativity and when Sir Arthur Eddington was creating relativistic models of the universe. It ends with the completion of the LHC in late 2008, after surveying modern ideas of particle physics and astrophysics – woven together to form a whole account of the universe.
Symmetries and Conservation Laws in Particle Physics: An Introduction to Group Theory for Particle Physicists
By Stephen Haywood
Imperial College Press
Hardback: £36 $58
Paperback: £17 $28
Group theory provides the language for describing how particles (and in particular, their quantum numbers) combine. This provides understanding of hadronic physics as well as physics beyond the Standard Model. The book examines symmetries and conservation laws in quantum mechanics and relates these to groups of transformations. The symmetries of the Standard Model associated with the electroweak and strong (QCD) forces are described by the groups U(1), SU(2) and SU(3). The properties of these groups are examined and the relevance to particle physics is discussed.
Primordial Cosmology
By Giovanni Montani, Marco Valerio Battisti, Riccardo Benini and Giovanni Imponente
World Scientific
Hardback: £123 $199
E-book: $259
In this book the authors provide a self-consistent and complete treatment of the dynamics of the very early universe, passing through a concise discussion of the Standard Cosmological Model, a precise characterization of the role played by the theory of inflation, up to a detailed analysis of the anisotropic and inhomogeneous cosmological models. They trace clearly the backward temporal evolution of the universe, starting with the Robertson–Walker geometry and ending with the recent results of loop quantum cosmology on the “Big Bounce”. |
# Change circle segments depending on diameter
I want to program a tool for checking radial parts of a mesh.
The idea is: you push a button and the script adds a circle object to the scene. You then grab it and place it, and when you scale the circle it changes its number of segments depending on the diameter.
I'm stuck at this point for now:
import bpy
import bmesh


class OPA(bpy.types.Operator):
    """Add a wireframe reference circle at the 3D cursor."""
    # bl_idname and bl_label are arbitrary names, added so the operator can register.
    bl_idname = "mesh.add_reference_circle"
    bl_label = "Add Reference Circle"
    bl_options = {'REGISTER', 'UNDO'}

    def execute(self, context):
        # Make a new BMesh
        bm = bmesh.new()

        # Add a circle (should return all geometry created, not just verts)
        bmesh.ops.create_circle(
            bm,
            cap_ends=True,
            cap_tris=True,
            diameter=2,
            segments=8)

        # Finish up, write the bmesh into a new mesh
        me = bpy.data.meshes.new("Mesh")
        bm.to_mesh(me)
        bm.free()

        # Create an object for the mesh, display it as wireframe,
        # place it at the cursor and link it into the scene
        scene = context.scene
        obj = bpy.data.objects.new("Object", me)
        obj.show_wire = True
        obj.show_all_edges = True
        obj.draw_type = 'WIRE'
        obj.location = scene.cursor_location
        scene.objects.link(obj)

        # Select and make active
        scene.objects.active = obj
        obj.select = True

        # This is where I am stuck: polling the selection like this just blocks Blender.
        # while True:
        #     if obj.select and scene.objects.active == obj:
        #         ...

        return {'FINISHED'}


bpy.utils.register_class(OPA)
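One possible direction, not a tested solution: register a scene-update handler that adjusts the segment count whenever the object's scale changes. The sketch below assumes the same Blender 2.7x Python API as the script above, that the created object is still named "Object", and an arbitrary placeholder rule of 8 segments per unit of diameter:

import bpy
import bmesh


def update_circle_segments(scene):
    # Adjust the segment count of the reference circle whenever its scale changes.
    obj = scene.objects.get("Object")
    if obj is None:
        return
    world_diameter = 2.0 * obj.scale.x           # base mesh diameter is 2; assume uniform scaling
    segments = max(8, int(world_diameter * 8))   # placeholder rule: 8 segments per unit of diameter
    if segments == obj.get("last_segments", 0):  # skip rebuilds when nothing changed
        return
    obj["last_segments"] = segments
    bm = bmesh.new()
    bmesh.ops.create_circle(bm, cap_ends=True, cap_tris=True,
                            diameter=2, segments=segments)
    bm.to_mesh(obj.data)
    bm.free()
    obj.data.update()


bpy.app.handlers.scene_update_post.append(update_circle_segments)

A driver or a modal operator would be cleaner than a global handler, but this shows the basic idea of reacting to the object's scale.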
# If the integers a and n are greater than 1 and the product
Manager
Joined: 15 Nov 2007
Posts: 133
If the integers a and n are greater than 1 and the product [#permalink]
19 Jan 2008, 15:04
If the integers a and n are greater than 1 and the product of the first 8 positive integers is a multiple of a^n, what is the value of a ?
(1) a^n = 64
(2) n=6
CEO
Joined: 17 Nov 2007
Posts: 3484
Concentration: Entrepreneurship, Other
Schools: Chicago (Booth) - Class of 2011
GMAT 1: 750 Q50 V40
19 Jan 2008, 15:10
D
the product of the first 8 positive integers is
$$P=1*2*3*4*5*6*7*8=2*3*2^2*5*(3*2)*7*2^3=2^7*3^2*5*7$$
$$2^6=64$$
1. only $$2^6$$ works. suff.
2. only $$2^6$$ works. suff.
Manager
Joined: 15 Nov 2007
Posts: 133
19 Jan 2008, 15:18
walker wrote:
D
the product of the first 8 positive integers is
$$P=1*2*3*4*5*6*7*8=2*3*2^2*5*(3*2)*7*2^3=2^7*3^2*5*7$$
$$2^6=64$$
1. only $$2^6$$ works. suff.
2. only $$2^6$$ works. suff.
Big walker, answer is B according to OA. Plus 1 for answering anyways though (can you take another crack at it for me).
SVP
Joined: 29 Mar 2007
Posts: 2491
19 Jan 2008, 15:24
dominion wrote:
If the integers A and N are greater than 1, and the product of the first 8 positive integers is a multiple of A^N, what is the value of A?
s1: A^N=64
s2: n=6
Thanks!
S1: A can be 2, 4, or 8 (thus n is 6, 3, or 2); 8! is a multiple of any of these combinations.
S2: we have n=6. 8! = 8*7*6*5*4*3*2 ---> 2^7*3^2*5*7. A MUST be 2, or 8! won't be divisible by A^6.
Try 4^6 --> 2^12; this doesn't work. 3^6: there aren't enough 3's to cover this.
Thus s2 is suff.
B
VP
Joined: 28 Dec 2005
Posts: 1482
19 Jan 2008, 15:55
Hey gmatblackbelt, I'm not following the logic above for statement 2.
We know that A^6 = (2^7*3^2*5)k ... where do we go from here ?
SVP
Joined: 29 Mar 2007
Posts: 2491
19 Jan 2008, 16:53
pmenon wrote:
Hey gmatblackbelt, Im not following the logic above for statement 2.
We know that A^6 = (2^7*3^2*5)k ... where do we go from here ?
A^6 does not equal 2^7*3^2*5*7
8! is divisible by A^6; the only way this can hold is for A to equal 2. It cannot equal 1 since the main stem said A and N are both greater than 1.
Director
Joined: 03 Sep 2006
Posts: 839
19 Jan 2008, 22:31
walker wrote:
D
the product of the first 8 positive integers is
$$P=1*2*3*4*5*6*7*8=2*3*2^2*5*(3*2)*7*2^3=2^7*3^2*5*7$$
$$2^6=64$$
1. only $$2^6$$ works. suff.
2. only $$2^6$$ works. suff.
All fine in your approach, except that you miss other possibilities in (i), which can be:
$$8^2, 4^3$$ etc. Thus it's not only $$2^6$$ that works, so (i) is not sufficient.
But if n=6 is fixed, as in statement (ii), then from $$2^7*3^2*5*7$$ it is clear that A = 2.
CEO
Joined: 17 Nov 2007
Posts: 3484
Concentration: Entrepreneurship, Other
Schools: Chicago (Booth) - Class of 2011
GMAT 1: 750 Q50 V40
20 Jan 2008, 00:12
I agree
Senior Manager
Affiliations: SPG
Joined: 15 Nov 2006
Posts: 310
04 Jun 2010, 00:22
How should we solve this?
Senior Manager
Joined: 25 Jun 2009
Posts: 286
Re: How to solve this? [#permalink]
Updated on: 04 Jun 2010, 08:40
dimitri92 wrote:
How should we solve this?
We know $$8! = 2^7 * 3^2 * 5 * 7$$
From the question it is given that $$a^n * k = 8!$$
From St-1: $$a^n = 64$$. $$a$$ can be 2, 4 or 8, given that $$n$$ is greater than 1.
From St-2: $$a^6*K = 8!$$ -> a can only be 2.
Hence B
Originally posted by cipher on 04 Jun 2010, 04:14.
Last edited by cipher on 04 Jun 2010, 08:40, edited 1 time in total.
Math Expert
Joined: 02 Sep 2009
Posts: 47184
Re: How to solve this? [#permalink]
04 Jun 2010, 08:03
dimitri92 wrote:
How should we solve this?
If the integers a and n are greater than 1 and the product of the first 8 positive integers is a multiple of a^n, what is the value of a?
Prime factorization would be the best way for such kind of questions.
Given: $$a^n*k=8!=2^7*3^2*5*7$$. Q: $$a=?$$
(1) $$a^n=64=2^6=4^3=8^2$$, so $$a$$ can be 2, 4, or 8. Not sufficient.
(2) $$n=6$$ --> the only integer (more than 1), which is the factor of 8!, and has the power of 6 (at least) is 2, hence $$a=2$$. Sufficient.
Manager
Status: what we want to do, do it as soon as possible
Joined: 24 May 2010
Posts: 91
Location: Vietnam
WE 1: 5.0
Re: How to solve this? [#permalink]
15 Mar 2011, 23:49
Bunuel wrote:
dimitri92 wrote:
How should we solve this?
(2) $$n=6$$ --> the only integer (more than 1), which is the factor of 8!, and has the power of 6 (at least) is 2, hence $$a=2$$. Sufficient.
Hi bunuel,
Thank you for your instruction. I have another query:
how can we quickly see that 8! is not a multiple of 4^6 (which means a ≠ 4)?
Math Forum Moderator
Joined: 20 Dec 2010
Posts: 1899
Re: How to solve this? [#permalink]
16 Mar 2011, 09:53
MICKEYXITIN wrote:
Bunuel wrote:
dimitri92 wrote:
How should we solve this?
(2) $$n=6$$ --> the only integer (more than 1), which is the factor of 8!, and has the power of 6 (at least) is 2, hence $$a=2$$. Sufficient.
Hi bunuel,
Thank you for your instruction. I have another query:
how can we quickly see that 8! is not a multiple of 4^6 (which means a ≠ 4)?
8! has 2^7 as its factor. Thus maximum exponent of 4 is 3; $$(2^2)^3*2=(4)^3*2$$
Look into this for more on factorials;
Retired Moderator
Joined: 16 Nov 2010
Posts: 1467
Location: United States (IN)
Concentration: Strategy, Technology
Re: How to solve this? [#permalink]
16 Mar 2011, 20:14
From this we have three possibilities for a^n -> 2^7, 3^2 and 4^3 so a can be 2, 3 or 4
From (1) a^n = 64 => a^n = 2^6 or 4^3, so not sufficient
(2) n = 6, so we can rule out 3 or 4, as 2 is the only number with exponent > 6
Director
Status: Impossible is not a fact. It's an opinion. It's a dare. Impossible is nothing.
Affiliations: University of Chicago Booth School of Business
Joined: 03 Feb 2011
Posts: 793
Re: How to solve this? [#permalink]
16 Mar 2011, 20:48
I think 3 is ruled out, isn't it? All factorials from 5! upward are even multiples of 10.
Posted from my mobile device
Retired Moderator
Joined: 16 Nov 2010
Posts: 1467
Location: United States (IN)
Concentration: Strategy, Technology
Re: How to solve this? [#permalink]
16 Mar 2011, 22:36
In (1) 3 raised to any integer can't be 64. In (2), 3 is ruled out because power/exponent of 3 < 6.
GMAT Tutor
Joined: 24 Jun 2008
Posts: 1345
Re: a raised to n [#permalink]
25 May 2011, 03:15
When looking at Statement 1, we know a^n = 64 and a and n are positive integers greater than 1, so a could be 2, 4 or 8 (since 2^6 = 4^3 = 8^2 = 64). The information in the stem isn't actually important here.
When we look at Statement 2, we know that 8! is divisible by a^6. If we prime factorize 8!, we find:
8! = 8*7*6*5*4*3*2 = (2^3)(7)(2*3)(5)(2^2)(3)(2) = (2^7)(3^2)(5)(7)
We need this prime factorization to be divisible by a^6 where a > 1. Looking at the prime factorization, the only possibility is that a = 2 (since for any other prime p besides 2, we can't divide the factorization above by p^6, nor can we divide the factorization above by (2^2)^6 = 2^12).
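Both statements can also be checked by brute force; a small illustrative sketch in standard-library Python:

import math

fact8 = math.factorial(8)     # 40320 = 2^7 * 3^2 * 5 * 7

# Statement (2): n = 6. Which integers a > 1 satisfy "8! is a multiple of a^6"?
print([a for a in range(2, 65) if fact8 % a**6 == 0])      # [2] -> a is unique, sufficient

# Statement (1): a^n = 64 with a > 1 and n > 1 allows several pairs.
print([(a, n) for a in range(2, 65) for n in range(2, 7) if a**n == 64])
# [(2, 6), (4, 3), (8, 2)] -> a is not unique, not sufficient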
Current Student
Joined: 07 Jan 2015
Posts: 68
Location: India
Concentration: Operations
GMAT 1: 710 Q49 V36
Re: If the integers a and n are greater than 1 and the product [#permalink]
21 Apr 2015, 06:45
Statement (1): multiple combinations are possible, hence insufficient.
Statement (2): the exponent of a in 8! is $$\lfloor 8/a\rfloor + \lfloor 8/a^2\rfloor + \lfloor 8/a^3\rfloor + ...$$, and it must be at least 6.
For a = 2:
$$\lfloor 8/2\rfloor + \lfloor 8/4\rfloor + \lfloor 8/8\rfloor = 4 + 2 + 1 = 7 \geq 6$$, and no larger base reaches an exponent of 6. Hence a = 2. Sufficient.
# What is a polynomial?
##### Examples
###### Lessons
1. Which of the following is a "Polynomial"?
1. $6$
$\sqrt{3} x$
${x^ \frac{1}{3}}$
$4{b^3} - 5b$
$3x + \sqrt{4z}$
${x^{-4}} - 3y$
$\frac{2}{13x} + 4$
2. Classifying polynomials (monomial, binomial, trinomial, or polynomial)
1. $x + 2$
$-3$
${x^3} -2x-4$
${x^3}-2 {x^2} +3 x-23$
###### Topic Notes
There are criteria for an expression to be called a polynomial: every term must be a product of numbers and variables in which the variables carry only whole-number (non-negative integer) exponents. For instance, an expression with negative or fractional exponents, a variable under a radical, or a variable in a denominator is not a polynomial. In this lesson, we will also learn how to classify polynomials based on their number of terms.
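For readers who like to verify such classifications in software, SymPy's is_polynomial method gives the same verdicts on the example expressions above (a small sketch; it assumes SymPy is installed):

import sympy as sp

x, y, z, b = sp.symbols('x y z b')

exprs = [
    sp.Integer(6),              # polynomial: a constant (monomial)
    sp.sqrt(3) * x,             # polynomial: an irrational coefficient is fine
    x ** sp.Rational(1, 3),     # not a polynomial: fractional exponent on the variable
    4 * b**3 - 5 * b,           # polynomial (binomial)
    3 * x + sp.sqrt(4 * z),     # not a polynomial: a variable under a square root
    x ** -4 - 3 * y,            # not a polynomial: negative exponent on the variable
    2 / (13 * x) + 4,           # not a polynomial: a variable in the denominator
]

for e in exprs:
    print(e, e.is_polynomial(*e.free_symbols))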
The Yukawa Particle and the Heisenberg Uncertainty Principle Revisited
Module by: OpenStax College.
Summary:
• Define Yukawa particle.
• State the Heisenberg uncertainty principle.
• Describe pion.
• Estimate the mass of a pion.
• Explain meson.
Particle physics as we know it today began with the ideas of Hideki Yukawa in 1935. Physicists had long been concerned with how forces are transmitted, finding the concept of fields, such as electric and magnetic fields to be very useful. A field surrounds an object and carries the force exerted by the object through space. Yukawa was interested in the strong nuclear force in particular and found an ingenious way to explain its short range. His idea is a blend of particles, forces, relativity, and quantum mechanics that is applicable to all forces. Yukawa proposed that force is transmitted by the exchange of particles (called carrier particles). The field consists of these carrier particles.
Specifically for the strong nuclear force, Yukawa proposed that a previously unknown particle, now called a pion, is exchanged between nucleons, transmitting the force between them. Figure 1 illustrates how a pion would carry a force between a proton and a neutron. The pion has mass and can only be created by violating the conservation of mass-energy. This is allowed by the Heisenberg uncertainty principle if it occurs for a sufficiently short period of time. As discussed in Probability: The Heisenberg Uncertainty Principle, the Heisenberg uncertainty principle relates the uncertainties $\Delta E$ in energy and $\Delta t$ in time by
$\Delta E \, \Delta t \geq \frac{h}{4\pi},$
(1)
where $h$ is Planck’s constant. Therefore, conservation of mass-energy can be violated by an amount $\Delta E$ for a time $\Delta t \approx \frac{h}{4\pi \Delta E}$, in which time no process can detect the violation. This allows the temporary creation of a particle of mass $m$, where $\Delta E = mc^2$. The larger the mass and the greater the $\Delta E$, the shorter is the time it can exist. This means the range of the force is limited, because the particle can only travel a limited distance in a finite amount of time. In fact, the maximum distance is $d \approx c\Delta t$, where $c$ is the speed of light. The pion must then be captured and, thus, cannot be directly observed because that would amount to a permanent violation of mass-energy conservation. Such particles (like the pion above) are called virtual particles, because they cannot be directly observed but their effects can be directly observed. Realizing all this, Yukawa used the information on the range of the strong nuclear force to estimate the mass of the pion, the particle that carries it. The steps of his reasoning are approximately retraced in the following worked example:
Example 1: Calculating the Mass of a Pion
Taking the range of the strong nuclear force to be about 1 fermi ($10^{-15}$ m), calculate the approximate mass of the pion carrying the force, assuming it moves at nearly the speed of light.
Strategy
The calculation is approximate because of the assumptions made about the range of the force and the speed of the pion, but also because a more accurate calculation would require the sophisticated mathematics of quantum mechanics. Here, we use the Heisenberg uncertainty principle in the simple form stated above, as developed in Probability: The Heisenberg Uncertainty Principle. First, we must calculate the time $\Delta t$ that the pion exists, given that the distance it travels at nearly the speed of light is about 1 fermi. Then, the Heisenberg uncertainty principle can be solved for the energy $\Delta E$, and from that the mass of the pion can be determined. We will use the units of $\text{MeV}/c^2$ for mass, which are convenient since we are often considering converting mass to energy and vice versa.
Solution
The distance the pion travels is $d \approx c\Delta t$, and so the time during which it exists is approximately
$\Delta t \approx \frac{d}{c} = \frac{10^{-15}\ \text{m}}{3.0 \times 10^{8}\ \text{m/s}} \approx 3.3 \times 10^{-24}\ \text{s}.$
(2)
Now, solving the Heisenberg uncertainty principle for $\Delta E$ gives
$\Delta E \approx \frac{h}{4\pi \Delta t} \approx \frac{6.63 \times 10^{-34}\ \text{J}\cdot\text{s}}{4\pi\,(3.3 \times 10^{-24}\ \text{s})}.$
(3)
Solving this and converting the energy to MeV gives
$\Delta E \approx (1.6 \times 10^{-11}\ \text{J})\,\frac{1\ \text{MeV}}{1.6 \times 10^{-13}\ \text{J}} = 100\ \text{MeV}.$
(4)
Mass is related to energy by $\Delta E = mc^2$, so that the mass of the pion is $m = \Delta E/c^2$, or
$m \approx 100\ \text{MeV}/c^2.$
(5)
Discussion
This is about 200 times the mass of an electron and about one-tenth the mass of a nucleon. No such particles were known at the time Yukawa made his bold proposal.
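The arithmetic in this example is easy to reproduce; a short Python check, with the constants rounded as in the text, gives the same numbers:

import math

d = 1e-15         # range of the strong force in metres (1 fermi)
c = 3.0e8         # speed of light, m/s
h = 6.63e-34      # Planck's constant, J*s

dt = d / c                        # Equation (2): ~3.3e-24 s
dE = h / (4 * math.pi * dt)       # Equation (3): ~1.6e-11 J
print(dE / 1.6e-13)               # Equation (4): ~100 MeV, i.e. m ~ 100 MeV/c^2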
Yukawa’s proposal of particle exchange as the method of force transfer is intriguing. But how can we verify his proposal if we cannot observe the virtual pion directly? If sufficient energy is in a nucleus, it would be possible to free the pion—that is, to create its mass from external energy input. This can be accomplished by collisions of energetic particles with nuclei, but energies greater than 100 MeV are required to conserve both energy and momentum. In 1947, pions were observed in cosmic-ray experiments, which were designed to supply a small flux of high-energy protons that may collide with nuclei. Soon afterward, accelerators of sufficient energy were creating pions in the laboratory under controlled conditions. Three pions were discovered, two with charge and one neutral, and given the symbols $\pi^{+}$, $\pi^{-}$, and $\pi^{0}$, respectively. The masses of $\pi^{+}$ and $\pi^{-}$ are identical at $139.6\ \text{MeV}/c^2$, whereas $\pi^{0}$ has a mass of $135.0\ \text{MeV}/c^2$. These masses are close to the predicted value of $100\ \text{MeV}/c^2$ and, since they are intermediate between electron and nucleon masses, the particles are given the name meson (now an entire class of particles, as we shall see in Particles, Patterns, and Conservation Laws).
The pions, or $\pi$-mesons as they are also called, have masses close to those predicted and feel the strong nuclear force. Another previously unknown particle, now called the muon, was discovered during cosmic-ray experiments in 1936 (one of its discoverers, Seth Neddermeyer, also originated the idea of implosion for plutonium bombs). Since the mass of a muon is around $106\ \text{MeV}/c^2$, at first it was thought to be the particle predicted by Yukawa. But it was soon realized that muons do not feel the strong nuclear force and could not be Yukawa’s particle. Their role was unknown, causing the respected physicist I. I. Rabi to comment, “Who ordered that?” This remains a valid question today. We have discovered hundreds of subatomic particles; the roles of some are only partially understood. But there are various patterns and relations to forces that have led to profound insights into nature’s secrets.
Summary
• Yukawa’s idea of virtual particle exchange as the carrier of forces is crucial, with virtual particles being formed in temporary violation of the conservation of mass-energy as allowed by the Heisenberg uncertainty principle.
Problems & Exercises
Exercise 1
A virtual particle having an approximate mass of $10^{14}\ \text{GeV}/c^2$ may be associated with the unification of the strong and electroweak forces. For what length of time could this virtual particle exist (in temporary violation of the conservation of mass-energy as allowed by the Heisenberg uncertainty principle)?
Solution
$3 \times 10^{-39}\ \text{s}$
Exercise 2
Calculate the mass in $\text{GeV}/c^2$ of a virtual carrier particle that has a range limited to $10^{-30}$ m by the Heisenberg uncertainty principle. Such a particle might be involved in the unification of the strong and electroweak forces.
Exercise 3
Another component of the strong nuclear force is transmitted by the exchange of virtual K-mesons. Taking K-mesons to have an average mass of $495\ \text{MeV}/c^2$, what is the approximate range of this component of the strong force?
Solution
$1.99 \times 10^{-16}\ \text{m}\ (0.2\ \text{fm})$
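The same two relations, $\Delta t \approx h/(4\pi\,\Delta E)$ and $d \approx c\Delta t$, reproduce the exercise answers; a quick Python check with rounded constants:

import math

h = 6.63e-34           # Planck's constant, J*s
c = 3.0e8              # speed of light, m/s
J_per_GeV = 1.6e-10    # joules in one GeV

# Exercise 1: lifetime of a 1e14 GeV/c^2 virtual particle
print(h / (4 * math.pi * 1e14 * J_per_GeV))     # ~3e-39 s

# Exercise 3: range carried by a 495 MeV/c^2 (0.495 GeV/c^2) K-meson
dt = h / (4 * math.pi * 0.495 * J_per_GeV)
print(c * dt)                                   # ~2e-16 m, i.e. about 0.2 fm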
Glossary
pion:
particle exchanged between nucleons, transmitting the force between them
virtual particles:
particles which cannot be directly observed but their effects can be directly observed
meson:
particle whose mass is intermediate between the electron and nucleon masses
emacs-orgmode
## [O] LaTeX export: underscores and the syntax package
From: peter . frings
Subject: [O] LaTeX export: underscores and the syntax package
Date: Thu, 12 May 2011 16:00:43 +0200
Good afternoon all,
I spent the best part of the afternoon trying to figure out why an
org-generated .tex file wouldn’t compile with my set-up. It turns out that the
‘syntax’ package messes a bit with the definition of an underscore, making it
impossible to use the underscore in a \label.
Unfortunately, the LaTeX exporter uses underscores in its section labels.
Actually, it is possible to have the _ in \label: use the ‘nounderscore’ option
with the syntax package. But you then do not get the tweaked underscore in
plain text (unless you use \_ again).
Since I’d like to keep the syntax package and the ease of using plain
underscores, I’d like to avoid the _ in labels.
Maybe I didn’t look hard enough, but I didn’t find an obvious way to change the
way org generates the labels. Would it be an option to use the same
label-generating code as AUCTeX mode? That would be very nice!
Thanks,
Peter.
--
c++; // this makes c bigger but returns the old value |
# Tag Info
Accepted
### Could a Dyson fan scale up to be used as a bladeless aircraft engine?
No. Not a useful propulsion engine. The first problem is power. The air stream from Dyson's fans is weaker than what you can get from a conventional fan the same size, and jet engines need a very ...
• 17.6k
Accepted
### Why aren't tilting propellers used as an alternative for ailerons or elevators?
It isn't done because a moving control surfaces is easier to design and build than an engine mount that rotates. Plus the associated structure needed to accommodate the thrust, p-factor, and ...
• 65.3k
### Can a pulse jet be used on a light GA aircraft?
In principle yes. But some details will turn most potential operators off. Noise is the obvious first one. You might not mind, but cockpit noise and vibration in operation will certainly put an ...
• 217k
### What makes an engine suitable for supersonic flight?
Yes. This is more or less exactly what had been planned for the (now defunct) Aerion supersonic business jet. The engine was to be developed by GE, with the core of a CFM56, the fan replaced with a ...
• 5,228
### Are ducted fans more efficient?
There is a lot going on here. The short answer is that a ducted fan (what you have pictured) can produce a lot more thrust (experiments from one paper say twice as much) than an open rotor of the same ...
• 3,551
### Why is max endurance different for jet and props?
Difference between engine types That different engine types cause the optimum loiter speed to be different is due to the amount of air used for thrust creation. Propellers accelerate a lot of air by ...
• 217k
Accepted
### What is a biplane propeller and how efficient is it?
It is essentially two propellers stack on top of each other. You can see it in this picture of a Lazair. As for why they were chosen for the Lazair, Many have asked why Ultraflight opted for ...
• 95.9k
### Would injecting water into a jet engine’s exhaust increase velocity?
Spraying water into the exhaust stream will cool down the exhaust flow. The energy needed to heat and evaporate the water needs to come from somewhere, after all. The cooler and denser exhaust flow ...
• 217k
Accepted
### Are fixed wing aircraft with gimbal thrust feasible?
Could you create an aircraft using an jet engine/propeller/ducted fan on a gimbal? Sure you 'could'; it is feasible by the laws of physics. But just because you can does not mean that you should. ...
• 65.3k
### Is it possible for a ramjet to start from 0 velocity?
No, a ramjet can't be started at zero speed. Yes, there are missiles that use ramjet propulsion without any rotating parts: they use a rocket to accelerate the missile to high enough speed for the ...
• 14.3k
Accepted
### Why are the top speeds for jet engines higher than for propellers?
The thrust of a propeller is proportional to the inverse of airspeed, while the thrust of a pure turbojet is roughly constant over airspeed in the subsonic region. This means that two airplanes with ...
• 217k
### Is there any engine that doesn't use a propellant to produce thrust?
Yes, there are, although they are not for commercial aviation. One example is solar-powered aircraft (source: Wikipedia). Alternatively, there are other means of storing energy - rubber bands - but this ...
• 716
### Why wasn't a scramjet used for the Concorde?
A supersonic ram jet (scramjet) requires a fuel with a very high flame speed, so the combustion doesn't take place after the fuel-air mixture has left the engine. Aviation fuel would be completely ...
• 217k
Accepted
### How does turbojet thrust change with altitude?
The thrust variation with altitude would be highly engine specific, but the general trend is nicely depicted in the image below: Read this for further details.
• 6,764
### Why does a rocket engine increase power with speed if the burn rate is constant?
Put simply, the variation in power is due to the distinction between the exhaust jet power and mechanical power added to the vehicle. The power of the exhaust gas stream measured in the rocket-fixed ...
• 306
Accepted
### Can an airliner land safely using only propulsion control?
Some airliners have in fact landed more or less successfully using only engine power for control. The 2003 DHL attempted shootdown incident is an example. A major difference between United 232 and ...
• 2,418
Accepted
### What are the advantages of using a ramjet in an air-to-surface missile?
A rocket-powered missile carries both fuel and oxidiser and oxidiser is the heavier part. For example a gram of jet fuel needs almost 3.4 grams of oxygen to fully burn! Increasing the fuel load ...
• 54.1k
### Can a pulse jet be used on a light GA aircraft?
This question is pretty close. And you should read here as well as here Generally pulse jets are not great at low speeds (and can be hard to start when stationary). These are key aspects of small GA ...
• 95.9k
### How does turbojet thrust change with altitude?
Thrust is produced by accelerating air. The exhaust air leaves the engine nozzle at a fairly fixed velocity, so the acceleration is mainly controlled by the difference between the exhaust and incoming ...
• 171
### Why aren't thin, delta-shaped wings used for fighter aircraft?
Yes, it is. It was used on many 2nd-generation designs already mentioned by habu, and it is returning in the 4.5th generation, now with canards: Eurofighter Typhoon: JAS-39 Gripen: Dassault ...
• 54.1k
### Why wasn't a scramjet used for the Concorde?
The Skreemr at the present time is only an advertisement idea, so comparisons are not really meaningful. Nevertheless, the difference with the Concorde is the expected range of speeds: at Mach ~2 a ...
• 31.5k
### Would injecting water into a jet engine’s exhaust increase velocity?
Water injection in the exhaust could work if: The water is fully evaporated before it exits the engine exhaust. The fuel injected in the combustion chamber must also be fully evaporated for effective ...
• 57.7k
### Is it possible for a ramjet to start from 0 velocity?
No. A ramjet relies on stagnation pressure at the inlet in order to compress air prior to combustion. They are typically not effective until the speeds get quite high, somewhere in the Mach 2+ range. ...
• 65.3k
### How is mass flow rate computed?
The short answer is: No. One cannot calculate the inlet mass-flow based on the velocity alone. However in order to "know" the velocity ( $v$ ) you most likely would have measured everything you need (...
• 1,537
### Engines more powerful than the brakes?
Many small 4- to 6-seat propeller airplanes can overcome their brake/wheel holding capability, on concrete and asphalt. My Cessna Cardinal is 180HP, I do engine pre-takeoff runup checks at 1700-1800 ...
• 8,689
### How does the nozzle diameter affect the thrust of a ducted propeller?
Welcome. I'm afraid your theory wasn't actually working. Reducing the duct's exit diameter led to an increase in its internal pressure, increasing load on the propeller, and likely even causing some ...
• 17.6k
Accepted
### How does the load transfer from a prop to the airframe?
The engine mount is an important component that transfers the prop thrust to the airframe. It's clearly visible in this photograph of a fast and powerful plane, a Bf109. Of course, bearings transfer ...
• 10.4k
## Sunday, September 27, 2020
### My Criteria for Reviewing Papers
Accept-or-reject decisions for the NeurIPS 2020 conference are out, with 9454 submissions and 1900 accepted papers (20% acceptance rate). Congratulations to everyone (regardless of acceptance decision) for their hard work in doing good research!
It's common knowledge among machine learning (ML) researchers that acceptance decisions at NeurIPS and other conferences are something of a weighted dice roll. In this silly theatre we call "Academic Publishing" -- a mostly disjoint concept from research, by the way -- reviews are all over the place because each reviewer favors different things in ML papers. Here are some criteria that a reviewer might care about:
Correctness: This is the bare minimum for a scientific paper. Are the claims made in the paper scientifically correct? Did the authors take care not to train on the test set? If an algorithm was proposed, do the authors convincingly show that it works for the reasons they stated?
New Information: Your paper has to contribute new knowledge to the field. This can take the form of a new algorithm, or new experimental data, or even just a different way of explaining an existing concept. Even survey papers should contain some nuggets of new information, such as a holistic view unifying several independent works.
Proper Citations: a related work section that articulates connections to prior work and why your work is novel. Some reviewers will reject papers that don't tithe prior work adequately, or isn't sufficiently distinguished from it.
SOTA results: It's common to see reviewers demand that papers (1) propose a new algorithm and (2) achieve state-of-the-art (SOTA) on a benchmark.
More than "Just SOTA": No reviewer will penalize you for achieving SOTA, but some expect more than just beating the benchmark, such as one or more of the criteria in this list. Some reviewers go as far as to bash the "SOTA-chasing" culture of the field, which they deem to be "not very creative" and "incremental".
Simplicity: Many researchers profess to favor "simple ideas". However, the difference between "your simple idea" and "your trivial extension to someone else's simple idea" is not always so obvious.
Complexity: Some reviewers deem papers that don't present any new methods or fancy math proofs as "trivial" or "not rigorous".
Clarity & Understanding: Some reviewers care about the mechanistic details of proposed algorithms and furthering understanding of ML, not just achieving better results. This is closely related to "Correctness".
Is it "Exciting"?: Julian Togelius (AC for NeurIPS '20) mentions that many papers he chaired were simply not very exciting. Only Julian can know what he deems "exciting", but I suppose he means having "good taste" in choosing research problems and solutions.
Sufficiently Hard Problems: Some reviewers reject papers for evaluating on datasets that are too simple, like MNIST. "Sufficiently hard" is a moving goal post, with the implicit expectation that as the field develops better methods the benchmarks have to get harder to push unsolved capabilities. Also, SOTA methods on simple benchmarks are not always SOTA on harder benchmarks that are closer to real world applications. Thankfully my most cited paper was written at a time where it was still acceptable to publish on MNIST.
Is it Surprising? Even if a paper demonstrates successful results, a reviewer might claim that they are unsurprising or "obvious". For example, papers applying standard object recognition techniques to a novel dataset might be argued to be "too easy and straightforward" given that the field expects supervised object recognition to be mostly solved (this is not really true, but the benchmarks don't reflect that).
I really enjoy papers that defy intuitions, and I personally strive to write surprising papers.
Some of my favorite papers in this category do not achieve SOTA or propose any new algorithms at all:
Is it Real? Closely related to "sufficiently hard problems". Some reviewers think that games are a good testbed to study RL, while others (typically from the classical robotics community) think that Mujoco Ant and a real robotic quadruped are entirely different problems; algorithmic comparisons on the former tell us nothing about the same set of experiments on the latter.
Does Your Work Align with Good AI Ethics? Some view the development of ML technology as a means to build a better society, and discourage papers that don't align with their AI ethics. The required "Broader Impact" statements in NeurIPS submissions this year are an indication that the field is taking this much more seriously. For example, if you submit a paper that attempts to infer criminality from only facial features or perform autonomous weapon targeting, I think it's likely your paper will be rejected regardless of what methods you develop.
Different reviewers will prioritize different aspects of the above, and many of these criteria are highly subjective (e.g. problem taste, ethics, simplicity). For each of the criteria above, it's possible to come up with counterexamples of highly-cited or impactful ML papers that don't meet that criteria but possibly meet others.
## My Criteria
I wanted to share my criteria for how I review papers. When it comes to recommending accept/reject, I mostly care about Correctness and New Information. Even if I think your paper is boring and unlikely to be an actively researched topic in 10 years, I will vote to accept it as long as your paper helped me learn something new that I didn't think was already stated elsewhere.
Some more specific examples:
• If you make a claim about humanlike exploration capabilities in RL in your introduction and then propose an algorithm to do something like that, I'd like to see substantial empirical justification that the algorithm is indeed similar to what humans do.
• If your algorithm doesn't achieve SOTA, that's fine with me. But I would like to see a careful analysis of why it doesn't.
• When papers propose new algorithms, I prefer to see that the algorithm is better than prior work. However, I will still vote to accept if the paper presents a factually correct analysis of why it doesn't do better than prior work.
• If you claim that your new algorithm works better because of reason X, I would like to see experiments that show that it isn't because of alternate hypotheses X1, X2.
Correctness is difficult to verify. Many metric learning papers were proposed in the last 5 years and accepted at prestigious conferences, only for Musgrave et al. '20 to point out that the experimental methodology between these papers were not consistent.
I should get off my high horse and say that I'm part of the circus too. I've reviewed papers for 10+ conferences and workshops and I can honestly say that I only understood 25% of papers from just reading them. An author puts tens or hundreds of hours into designing and crafting a research paper and the experimental methodology, and I only put a few hours into deciding whether it is "correct science". Rarely am I able to approach a paper with the level of mastery needed to rigorously evaluate correctness.
A good question to constantly ask yourself is: "what experiment would convince me that the author's explanations are correct and not due to some alternate hypothesis? Did the authors check that hypothesis?"
I believe that we should accept all "adequate" papers, and more subjective things like "taste" and "simplicity" should be reserved for paper awards, spotlights, and oral presentations. I don't know if everyone should adopt this criteria, but I think it's helpful to at least be transparent as a reviewer on how I make accept/reject decisions.
If you're interested in getting mentorship for learning how to read, critique, and write papers better, I'd like to plug my weekly office hours, which I hold on Saturday mornings over Google Meet. I've been mentoring about 6 people regularly over the last 3 months and it's working out pretty well.
Anyone who is not in a traditional research background (not currently in an ML PhD program) can reach out to me to book an appointment. You can think of this like visiting your TA's office hours for help with your research work. Here are some of the services I can offer, completely pro bono:
• If you have trouble understanding a paper I can try to read it with you and offer my thoughts on it as if I were reviewing it.
• If you're very very new to the field and don't even know where to begin I can offer some starting exercises like reading / summarizing papers, re-producing existing papers, and so on.
• I can try to help you develop a good taste of what kinds of problems to work on, how to de-risk ambitious ideas, and so on.
• Advice on software engineering aspects of research. I've been coding for over 10 years; I've picked up some opinions on how to get things done quickly.
• Helping you craft a compelling story for a paper you want to write.
No experience is required, all that you need to bring to the table is a desire to become better at doing research. The acceptance rate for my office hours is literally 100% so don't be shy!
## Sunday, September 13, 2020
### Chaos and Randomness
For want of a nail the shoe was lost.
For want of a shoe the horse was lost.
For want of a horse the rider was lost.
For want of a rider the message was lost.
For want of a message the battle was lost.
For want of a battle the kingdom was lost.
And all for the want of a horseshoe nail.
Was the kingdom lost due to random chance? Or was it the inevitable outcome resulting from sensitive dependence on initial conditions? Does the difference even matter? Here is a blog post about Chaos and Randomness with Julia code.
## Preliminaries
Consider a real vector space $X$ and a function $f: X \to X$ on that space. If we repeatedly apply $f$ to a starting vector $x_1$, we get a sequence of vectors known as an orbit $x_1, x_2, ... ,f^n(x_1)$.
For example, the logistic map is defined as
# one application of the logistic map: x -> r*x*(1-x)
function logistic_map(r, x)
    r*x*(1-x)
end
Here is a plot of successive applications of the logistic map for r=3.5. We can see that the system constantly oscillates between two values, ~0.495 and ~0.812.
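For readers who want to reproduce a plot like this, one way to generate an orbit is sketched below (assuming the Plots.jl package; the starting value and iteration count are arbitrary choices of mine):

using Plots  # any plotting package would do

# collect n successive applications of the map, starting from x0
function orbit(r, x0, n)
    xs = Vector{Float64}(undef, n)
    x = x0
    for i = 1:n
        x = logistic_map(r, x)
        xs[i] = x
    end
    xs
end

plot(orbit(3.5, 0.1, 100), seriestype = :scatter, markersize = 2,
     xlabel = "n", ylabel = "x_n", title = "Logistic map orbit, r = 3.5")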
## Definition of Chaos
There is surprisingly no universally accepted mathematical definition of Chaos. For now we will present a commonly used characterization by Devaney:
We can describe an orbit $x_1, x_2, ... ,f^n(x_1)$ as *chaotic* if:
1. The orbit is not asymptotically periodic, meaning that it never starts repeating, nor does it approach an orbit that repeats (e.g. $a, b, c, a, b, c, a, b, c...$).
2. The maximum Lyapunov exponent $\lambda$ is greater than 0. This means that if you place another trajectory starting near this orbit, it will diverge at a rate $e^\lambda$. A positive $\lambda$ implies that two trajectories will diverge exponentially quickly away from each other. If $\lambda<0$, then the distance between trajectories would shrink exponentially quickly. This is the basic definition of "Sensitive Dependence to Initial Conditions (SDIC)", also colloquially understood as the "butterfly effect".
Note that (1) intuitively follows from (2), because the Lyapunov exponent of an orbit that approaches a periodic orbit would be $<0$, which contradicts the SDIC condition.
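To make the Lyapunov condition concrete, here is one way to estimate $\lambda$ numerically for the logistic map (a sketch; it uses the standard 1-D estimate $\lambda \approx \frac{1}{N}\sum_n \ln|f'(x_n)|$, and the transient/iteration counts are arbitrary):

# derivative of the logistic map with respect to x
logistic_map_deriv(r, x) = r * (1 - 2x)

function lyapunov_exponent(r; x0 = 0.1, transient = 1000, N = 100_000)
    x = x0
    for _ = 1:transient              # discard the transient
        x = logistic_map(r, x)
    end
    s = 0.0
    for _ = 1:N
        s += log(abs(logistic_map_deriv(r, x)))
        x = logistic_map(r, x)
    end
    s / N
end

lyapunov_exponent(3.5)   # negative: the attractor is periodic, no SDIC
lyapunov_exponent(4.0)   # ≈ log(2) > 0: chaotic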
We can also define the map $f$ itself to be chaotic if there exists an invariant (trajectories cannot leave) subset $\tilde{X} \subset X$, where the following three conditions hold:
1. Sensitivity to Initial Conditions, as mentioned before.
2. Topological mixing (every point in orbits in $\tilde{X}$ approaches any other point in $\tilde{X}$).
3. Dense periodic orbits (every point in $\tilde{X}$ is arbitrarily close to a periodic orbit). At first, this is a bit of a head-scratcher given that we previously defined an orbit to be chaotic if it *didn't* approach a periodic orbit. The way to reconcile this is to think about the subspace $\tilde{X}$ being densely covered by periodic orbits, but they are all unstable so the chaotic orbits get bounced around $\tilde{X}$ for all eternity, never settling into an attractor but also unable to escape $\tilde{X}$.
Note that SDIC actually follows from the second two conditions. If these unstable periodic orbits cover the set $\tilde{X}$ densely and orbits also cover the set densely while not approaching the periodic ones, then intuitively the only way for this to happen is if all periodic orbits are unstable (SDIC).
These are by no means the only way to define chaos. The DynamicalSystems.jl package has an excellent documentation on several computationally tractable definitions of chaos.
## Chaos in the Logistic Family
Incidentally, the logistic map exhibits chaos for most values of r from about 3.56995 to 4.0. We can generate the bifurcation diagram quickly thanks to Julia's de-vectorized way of numeric programming.
using Plots  # needed for the scatter plot below

rs = [2.8:0.01:3.3; 3.3:0.001:4.0]
x0s = 0.1:0.1:0.6
N = 2000 # orbit length
x = zeros(length(rs), length(x0s), N)
# for each value of r (across rows)
for k = 1:length(rs)
    # initialize the starting conditions
    x[k, :, 1] = x0s
    for i = 1:length(x0s)
        for j = 1:N-1
            x[k, i, j+1] = logistic_map(rs[k], x[k, i, j])
        end
    end
end
plot(rs, x[:, :, end], markersize=2, seriestype = :scatter, title = "Bifurcation Diagram (Logistic Map)")
We can see how starting values y1=0.1, y2=0.2, ...y6=0.6 all converge to the same value, oscillate between two values, then start to bifurcate repeatedly until chaos emerges as we increase r.
## Spatial Precision Error + Chaos = Randomness
What happens to our understanding of the dynamics of a chaotic system when we can only know the orbit values with some finite precision? For instance, x=0.76399 or x=0.7641 but we only observe x=0.764 in either case.
We can generate 1000 starting conditions that are identical up to our measurement precision, and observe the histogram of where the system ends up after n=1000 iterations of the logistic map.
Let's pretend this is a probabilistic system and ask the question: what are the conditional distributions of $p(x_n|x_0)$, where $n=1000$, for different levels of measurement precision?
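One way to run this experiment (a sketch, reusing the logistic map above with a chaotic parameter value r = 4.0, and assuming Plots.jl for the histogram; the specific precision values and counts are arbitrary):

using Plots

# apply the logistic map n times starting from x
function iterate_map(r, x, n)
    for _ = 1:n
        x = logistic_map(r, x)
    end
    x
end

n = 1000                 # number of iterations
precision = 1e-3         # measurement precision; try 1e-3, 1e-8, 1e-12, ...
x0 = 0.764               # the value we "observe"

# 1000 true initial conditions that are indistinguishable at this precision
x0s = x0 .+ precision .* (rand(1000) .- 0.5)
final = [iterate_map(4.0, x, n) for x in x0s]

histogram(final, bins = 50, title = "p(x_n | x_0) at precision $precision")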
At less than $O(10^{-8})$ precision, we start to observe the entropy of the state evolution rapidly increasing. Even though we know that the underlying dynamics are deterministic, measurement uncertainty (a form of aleatoric uncertainty) can expand exponentially quickly due to SDIC. This results in $p(x_n|x_0)$ appearing to be a complicated probability distribution, even generating "long tails".
I find it interesting that the "multi-modal, probabilistic" nature of $p(x_n|x_0)$ vanishes to a simple uni-modal distribution when the measurement precision is sufficiently high to mitigate chaotic effects for $n=1000$. In machine learning we concern ourselves with learning fairly rich probability distributions, even going as far as to learn transformations of simple distributions into more complicated ones.
But what if we are being over-zealous with using powerful function approximators to model $p(x_n|x_0)$? For cases like the above, we are discarding the inductive bias that $p(x_n|x_0)$ arises from a simple source of noise (uniform measurement error) coupled with a chaotic "noise amplifier". Classical chaos on top of measurement error will indeed produce Indeterminism, but does that mean we can get away with treating $p(x_n|x_0)$ as purely random?
I suspect the apparent complexity of many "rich" probability distributions we encounter in the wild are more often than not just chaos+measurement error (e.g. weather). If so, how can we leverage that knowledge to build more useful statistical learning algorithms and draw inferences?
We already know that chaos and randomness are nearly equivalent from the perspective of computational distinguishability. Did you know that you can use chaos to send secret messages? This is done by having Alice and Bob synchronize a chaotic system $x$ with the same initial state $x_0$, and then Alice sends a message $0.001*signal + x$. Bob merely evolves the chaotic system $x$ on his own and subtracts it to recover the signal. Chaos has also been used to design pseudo-random number generators.
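Here is a toy version of that chaos-masking scheme (a sketch only; it reuses the orbit helper from the earlier sketch, sidesteps the synchronization step by assuming Alice and Bob already share the exact initial state $x_0$, and the parameter values are arbitrary):

r, x0 = 3.99, 0.4
signal = sin.(0.1 .* (1:200))                # the secret message
carrier = orbit(r, x0, 200)                  # Alice's chaotic carrier (orbit() from the sketch above)
transmitted = carrier .+ 0.001 .* signal     # what Alice broadcasts

# Bob regenerates the identical carrier from the shared x0 and subtracts it
recovered = (transmitted .- orbit(r, x0, 200)) ./ 0.001
maximum(abs.(recovered .- signal))           # ≈ 0, up to floating-point error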
## Saturday, June 20, 2020
### Free Office Hours for Non-Traditional ML Researchers
This post was prompted by a tweet I saw from my colleague, Colin:
I'm currently a researcher at Google with a "non-traditional background", where non-traditional background means "someone who doesn't have a PhD". People usually get PhDs so they can get hired for jobs that require that credential. In the case of AI/ML, this might be to become a professor at a university, or land a research scientist position at a place like Google, or sometimes even both.
At Google it's possible to become a researcher without having a PhD, although it's not very easy. There are a two main paths [1]:
One path is to join an AI Residency Program, which are fixed-term jobs from non-university institution (FAANG companies, AI2, etc.) that aim to jump-start a research career in ML/AI. However, these residencies are usually just 1 year long and are not long enough to really "prove yourself" as a researcher.
Another path is to start as a software engineer (SWE) in an ML-focused team and build your colleagues' trust in your research abilities. This was the route I took: I joined Google in 2016 as a software engineer in the Google Brain Robotics team. Even though I was a SWE by title, it made sense to focus on the "most important problem", which was to think really hard about why the robots weren't doing what we wanted and train deep neural nets in an attempt to fix those problems. One research project led to another, and now I just do research + publications all the time.
As the ML/AI publishing field has grown exponentially in the last few years, it has gotten harder to break into research (see Colin's tweet). Top PhD programs like BAIR usually require students to have a publication at a top conference like ICML, ICLR, NeurIPS before they even apply. I'm pretty sure I would not have been accepted to any PhD programs if I were graduating from college today, and would have probably ended up taking a job offer in quantitative finance instead.
The uphill climb gets even steeper for aspiring researchers with non-traditional backgrounds; they are competing with no shortage of qualified PhD students. As Colin alludes to, it is also getting harder for internationals to work at American technology companies and learn from American schools, thanks to our administration's moronic leadership.
The supply-demand curves for ML/AI labor are getting quite distorted. On one hand, we have a tremendous global influx of people wanting to solve hard engineering problems and contribute to scientific knowledge and share it openly with the world. On the other hand, there seems to be a shortage of formal training:
1. A research mentor to learn the academic lingo and academic customs from, and more importantly, how to ask good questions and design experiments to answer them.
2. Company environments where software engineers are encouraged to take bold risks and lead their own research (and not just support researchers with infra).
Free Office Hours
I can't do much for (2) at the moment, but I can definitely help with (1). To that end, I'm offering free ML research mentorship to aspiring researchers from non-traditional backgrounds via email and video conferencing.
I'm most familiar with applied machine learning, robotics, and generative modeling, so I'm most qualified to offer technical advice in these areas. I have a bunch of tangential interests like quantitative finance, graphics, and neuroscience. Regardless of technical topic, I can help with academic writing and de-risking ambitious projects and choosing what problems to work on. I also want to broaden my horizons and learn more from you.
If you're interested in using this resource, send me an email at <myfirstname><mylastname><2004><at><g****.com>. In your email, include:
2. What you want to get out of advising
3. A cool research idea you have in a couple sentences
Some more details on how these office hours will work:
1. Book weekly or bi-weekly Google Meet [2] calls to check up on your work and ask questions, with 15 minute time slots scheduled via Google Calendar.
2. The point of these office hours is not to answer "how do I get a job at Google Research", but to fulfill an advisor-like role in lieu of a PhD program. If you are farther along your research career we can discuss career paths and opportunities a little bit, but mostly I just want to help people with (1).
3. I'm probably not going to write code or run experiments for you.
4. I don't want to be that PI that slaps their name on all of their student's work - most advice I give will be given freely with no strings attached. If I make a significant contribution to your work or spend > O(10) hours working with you towards a publishable result, I may request being a co-author on a publication.
5. I reserve the right to decline meetings if I feel that it is not a productive use of my time or if other priorities take hold.
6. I cannot tell you about unpublished work that I'm working on at Google or any Google-confidential information.
7. I'm not offering ML consultation for businesses, so your research work has to be unrelated to your job.
8. To re-iterate point number 2 once more, I'm less interested in giving career advice and more interested in teaching you how to design experiments, how to cite and write papers, and communicating research effectively.
What do I get out of this? First, I get to expand my network. Second, I can only personally run so many experiments by myself so this would help me grow my own research career. Third, I think the supply of mentorship opportunities offered by academia is currently not scalable, and this is a bit of an experiment on my part to see if we can do better. I'd like to give aspiring researchers similar opportunities that I had 4 years ago that allowed me to break into the field.
Footnotes
[1] Chris Olah has a great essay on some additional options and pros and cons of non-traditional education.
[2] Zoom complies with Chinese censorship requests, so as a statement of protest I avoid using Zoom when possible.
## Wednesday, April 1, 2020
### Three Questions that Keep Me Up at Night
A Google interview candidate recently asked me: "What are three big science questions that keep you up at night?" This was a great question because one's answer reveals so much about one's intellectual interests - here are mine:
Q1: Can we imitate "thinking" from only observing behavior?
Suppose you have a large fleet of autonomous vehicles with human operators driving them around diverse road conditions. We can observe the decisions made by the human, and attempt to use imitation learning algorithms to map robot observations to the steering decisions that the human would take.
However, we can't observe what the homunculus is thinking directly. Humans read road text and other signage to interpret what they should and should not do. Humans plan more carefully when doing tricky maneuvers (parallel parking). Humans feel rage and drowsiness and translate those feelings into behavior.
Let's suppose we have a large car fleet and our dataset is so massive and perpetually growing that we cannot train it faster than we are collecting new data. If we train a powerful black-box function approximator to learn the mapping from robot observation to human behavior [1], and we use active-learning techniques like DAgger to combat false negatives, will that be enough to acquire these latent information processing capabilities? Can the car learn to think like a human, and how much?
Inferring low-dimensional unobserved states from behavior is a well-studied technique in statistical modeling. In recent years, meta-reinforcement learning algorithms have increased the capability of agents to change their behavior in the presence of new information. However, no one has applied this principle to the scale and complexity of "human-level thinking and reasoning variables". If we use basic black-box function approximators (ConvNets, ResNets, Transformers, etc.), will it be enough? Or will it still fail even with a million lifetimes worth of driving data?
In other words, can simply predicting human behavior lead to a model that can learn to think like a human?
One cannot draw a hard line between "thinking" and "pattern matching", but loosely speaking I'd want to see such learned latent variables reflect basic deductive and inductive reasoning capabilities. For example, a logical proposition formulated as a steering problem: "Turn left if it is raining; right otherwise".
This could also be addressed via other high-data environments:
• Observing trader orders on markets and seeing if we can recover the trader's deductive reasoning and beliefs about the future. See if we can observe rational thought (if not rational behavior).
• Recovering intent and emotions and desire from social network activity.
Q2: What is the computationally cheapest "organic building block" of an Artificial Life simulation that could lead to human-level AGI?
Many AI researchers, myself included, believe that competitive survival of "living organisms" is the only true way to implement general intelligence.
If you lack some mental power like deductive reasoning, another agent might exploit the reality to its advantage to out-compete you for resources.
If you don't know how to grasp an object, you can't bring food to your mouth. Intelligence is not merely a byproduct of survival; I would even argue that it is Life and Death itself from which all semantic meaning we perceive in the world arises (the difference between a "stable grasp" and an "unstable grasp").
How does one realize an A-Life research agenda? It would be prohibitively expensive to implement large-scale evolution with real robots, because we don't know how to get robots to self-replicate as living organisms do. We could use synthetic biology technology, but we don't know how to write complex software for cells yet and even if we could, it would probably take billions of years for cells to evolve into big brains. A less messy compromise is to implement A-Life in silico and evolve thinking critters in there.
We'd want the simulation to be fast enough to simulate armies of critters. Warfare was a great driver of innovation. We also want the simulation to be rich and open-ended enough to allow for ecological niches and tradeoffs between mental and physical adaptations (a hand learning to grasp objects).
Therein lies the big question: if the goal is to replicate the billions of years of evolutionary progress leading up to where we are today, what are the basic pieces of the environment that would be just good enough?
• Chemistry? Cells? Ribosomes? I certainly hope not.
• How do nutrient cycles work? Resources need to be recycled from land to critters and back for there to be ecological change.
• Is the discovery of fire important for evolutionary progression of intelligence? If so, do we need to simulate heat?
• What about sound and acoustic waves?
• Is a rigid-body simulation of MuJoCo humanoids enough? Probably not, if articulated hands end up being crucial.
• Is Minecraft enough?
• Does the mental substrate need to be embodied in the environment and subject to the physical laws of the reality? Our brains certainly are, but it would be bad if we had to simulate neural networks in MuJoCo.
• Is conservation of energy important? If we are not careful, it can be possible through evolution for agents to harvest free energy from their environment.
In the short story Crystal Nights by Greg Egan, simulated "Crabs" are built up of organic blocks that they steal from other Crabs. Crabs "reproduce" by assembling a new crab out of parts, like LEGO. But the short story left me wanting for more implementation details...
Q3: Loschmidt's Paradox and What Gives Rise to Time?
I recently read The Order of Time by Carlo Rovelli and being a complete Physics newbie, finished the book feeling more confused and mystified than when I had started.
The second law of thermodynamics, $\Delta{S} > 0$, states that entropy increases with time. That is the only physical law that requires time to "flow" forwards; all other physical laws have Time-Symmetry: they hold even if time were flowing backwards. In other words, T-Symmetry in a physical system implies conservation of entropy.
Microscopic phenomena (laws of mechanics on position, acceleration, force, electric field, Maxwell's equations) exhibit T-Symmetry. Macroscopic phenomena (gases dispersing in a room, people going about their lives), on the other hand, are T-Asymmetric. It is perhaps an adaptation to macroscopic reality being T-Asymmetric that our conscious experience itself has evolved to become aware of time passing. Perhaps bacteria do not need to know about time...
But if macroscopic phenomena are comprised of nothing more than countless microscopic phenomena, where the heck does entropy really come from?
Upon further Googling, I learned that this question is known as Loschmidt's Paradox. One resolution that I'm partially satisfied with is to consider that if we take all microscopic collisions to be driven by QM, then there really is no such thing as "T-symmetric" interactions, and thus microscopic interactions are actually T-asymmetric. A lot of the math becomes simpler to analyze if we consider a single pair of particles obeying randomized dynamics (whereas in Statistical Mechanics we are only allowed to assume that about a population of particles).
Even if we accept that macroscopic time originates from a microscopic equivalent of entropy, this still begs the question of what the origin of microscopic entropy (time) is.
Unfortunately, many words in English do not help to divorce my subjective, casual understanding of time from a more precise, formal understanding. Whenever I think of microscopic phenomena somehow "causing" macroscopic phenomena or the cause of time (entropy) "increasing", my head gets thrown for a loop. So much T-asymmetry is baked into our language!
I'd love to know of resources to gain a complete understanding of what we know and don't know, and perhaps a new language to think about Causality from a physics perspective.
If you have thoughts on these questions, or want to share your own big science questions that keep you up at night, let me know in the comments or on Twitter! #3sciencequestions |
# Torque help
1. Dec 7, 2003
### Drakon25th
Hello,
Alright, this is the problem:
Calculate the torque about the front support of a diving board exerted by a 70kg person 3.0m from that support
here's the picture:
[Diagram: a diving board resting on two supports, F1 at the back and F2 at the front, spaced 1.0 m apart; the person (CG) stands 3.0 m beyond the front support F2.]
Alright this is what i have done so far:
$$\sum T = 0$$
$$\sum T = m\,g\,d_{F_1F_2} - m_p\,g\,d_{F_2,\text{person}} = 0$$
$$\sum T = m\,g\,(1.0\,\text{m}) - (70\,\text{kg})\,g\,(3.0\,\text{m}) = 0$$
$$m\,g\,(1.0\,\text{m}) = (70\,\text{kg})\,g\,(3.0\,\text{m})$$
$$m = \frac{(70\,\text{kg})\,g\,(3.0\,\text{m})}{g\,(1.0\,\text{m})}$$
$$m = 210\,\text{kg}$$
I know the answer is supposed to be 2.1 x 10^3 m*N, but i don't know what to do after finding the mass on F1; can someone help me?
Last edited by a moderator: Dec 7, 2003
2. Dec 8, 2003
### ShawnD
force from gravity:
70 kg * 9.81 m/s^2 = 686.7 N
torque:
686.7 N * 3.0 m = 2060.1 N·m ≈ 2.1 x 10^3 N·m
3. Dec 8, 2003
### Drakon25th
ooh i see, thank you |
# The probability that a slightly perturbed numerical analysis problem is difficult
Burgisser, Peter and Cucker, Felipe and Lotz, Martin (2008) The probability that a slightly perturbed numerical analysis problem is difficult. Mathematics of Computation, 77. pp. 1559-1583. ISSN 1088-6842
We prove a general theorem providing smoothed analysis estimates for conic condition numbers of problems of numerical analysis. Our probability estimates depend only on geometric invariants of the corresponding sets of ill-posed inputs. Several applications to linear and polynomial equation solving show that the estimates obtained in this way are easy to derive and quite accurate. The main theorem is based on a volume estimate of $\varepsilon$-tubular neighborhoods around a real algebraic subvariety of a sphere, intersected with a spherical disk of radius $\sigma$. Besides $\varepsilon$ and $\sigma$, this bound depends only on the dimension of the sphere and on the degree of the defining equations. |
# Formulae for Catalan's constant.
Some years ago, someone showed me the formula (1). I have searched for its origin and for a proof. I wasn't able to trace the true origin of this formula, but I was able to find an elementary proof for it.
Since then, I've been interested in different approaches to finding more formulae like (1).
What other formulas similar to ($$1$$) are known?
Two days ago, reading Lewin's book "Polylogarithms and Associated Functions", I was able to find formula (2).
$$\displaystyle \dfrac{1}{3}C=\int_0^1 \dfrac{1}{x}\arctan\left(\dfrac{x(1-x)}{2-x}\right)dx\tag1$$
$$\displaystyle \dfrac{2}{5}C=\int_0^1 \dfrac{1}{x}\arctan\left(\dfrac{\sqrt{5}x(1-x)}{1+\sqrt{5}-\sqrt{5}x}\right)dx-\int_0^1 \dfrac{1}{x}\arctan\left(\dfrac{x(1-x)}{3+\sqrt{5}-x}\right)dx\tag2$$
$$C$$ being the Catalan's constant.
I have a proof for both of these formulae.
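Formula (1) is also easy to sanity-check numerically; here is a quick sketch in Julia, assuming the QuadGK package for the quadrature (Julia ships Catalan's constant as `Base.MathConstants.catalan`):

using QuadGK

integrand(x) = atan(x * (1 - x) / (2 - x)) / x
val, err = quadgk(integrand, 0, 1)

val                              # ≈ 0.305322
Base.MathConstants.catalan / 3   # ≈ 0.305322, agreeing to quadrature accuracy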
My approach relies on the following identity:
For all real $$x>1$$,
$$\displaystyle \int_0^1 \dfrac{1}{t} \arctan \left (\dfrac{t(1-t)}{\frac{x+1}{2}-t}\right) dt=\int_1^{\frac{\sqrt{x}+1}{\sqrt{x}-1}}\dfrac{\log(t)}{1+t^2}dt$$
• By using integration by parts they can be restated as integrals over $(0,1)$ of $\log(x)$ times a rational function. By using the residue theorem together with the fact that $\int_{0}^{1}\frac{\log(x)}{x-a}\,dx = \text{Li}_2\left(\frac{1}{a}\right)$ Catalan's constant should arise easily. – Jack D'Aurizio Jan 11 '16 at 18:38
• Please give an outline of the proof that you have, so we don't waste our time developing and typing it out. – Rory Daulton Jan 11 '16 at 18:57
• @Rory: my proofs are basic ones, no use of contour integration, change of variable and integration by parts are used only – FDP Jan 11 '16 at 19:12
• Ask Wikipedia . – user65203 Jan 11 '16 at 20:17
For all $x\in [0,1]$ and $\alpha>1$,
$\displaystyle \arctan\left(\dfrac{x(1-x)}{\tfrac{1+\alpha^2}{(1-\alpha)^2}-x}\right)=\arctan\left(\dfrac{x}{\tfrac{1+\alpha^2}{\alpha(\alpha-1)}+\tfrac{1}{\alpha}x}\right)+\arctan\left(\dfrac{x}{\tfrac{1+\alpha^2}{1-\alpha}+\alpha x}\right)$
For all $\alpha>1$, $\displaystyle J(\alpha)=\int_0^1\dfrac{1}{x}\arctan\left(\dfrac{x(1-x)}{\tfrac{1+\alpha^2}{(1-\alpha)^2}-x}\right)dx=\int_0^{\tfrac{\alpha-1}{\alpha+1}} \dfrac{\arctan x}{x\left(1-\tfrac{1}{\alpha}x\right)}dx-\int_0^{\tfrac{\alpha-1}{\alpha+1}} \dfrac{\arctan x}{x(1+\alpha x)}dx$
For $x \in ]0,1]$,
$\dfrac{1}{x\left(1-\tfrac{1}{\alpha}x\right)}-\dfrac{1}{x\left(1+\alpha x\right)}=\dfrac{1}{\alpha-x}+\dfrac{\alpha}{1+\alpha x}$
Thus, one obtains,
$\displaystyle J(\alpha)=\int_0^{\tfrac{\alpha-1}{\alpha+1}}\dfrac{\arctan x}{\alpha-x}dx+\int_0^{\tfrac{\alpha-1}{\alpha+1}}\dfrac{\alpha \arctan x}{1+\alpha x}dx$
$\displaystyle J(\alpha)=\Big[-\log(\alpha-x)\arctan x\Big]_0^{\tfrac{\alpha-1}{\alpha+1}}+\int_0^{\tfrac{\alpha-1}{\alpha+1}}\dfrac{\log(\alpha-x)}{1+x^2}dx+\Big[\log(1+\alpha x)\arctan x\Big]_0^{\tfrac{\alpha-1}{\alpha+1}}-\int_0^{\tfrac{\alpha-1}{\alpha+1}}\dfrac{\log(1+\alpha x)}{1+x^2}dx$
$\displaystyle J(\alpha)=\int_0^{\tfrac{\alpha-1}{\alpha+1}}\dfrac{\log\left(\tfrac{\alpha-x}{1+\alpha x}\right)}{1+x^2}dx$
Using change of variable $y=\dfrac{\alpha-x}{1+\alpha x}$ ,
$\displaystyle J(\alpha)=\int_1^{\alpha} \dfrac{\log x}{1+x^2}dx$
If $\alpha=\dfrac{\sqrt{x}+1}{\sqrt{x}-1}$, one obtains,
For all $x>1$, $\displaystyle \int_0^1 \dfrac{1}{t} \arctan \left (\dfrac{t(1-t)}{\tfrac{x+1}{2}-t}\right) dt=\int_1^{\tfrac{\sqrt{x}+1}{\sqrt{x}-1}}\dfrac{\log(t)}{1+t^2}dt$
when $x=3$, one obtains,
$\displaystyle \int_0^1 \dfrac{1}{t} \arctan \left (\dfrac{t(1-t)}{2-t}\right) dt=\int_1^{\tfrac{\sqrt{3}+1}{\sqrt{3}-1}}\dfrac{\log(t)}{1+t^2}dt=\int_1^{2+\sqrt{3}}\dfrac{\log(t)}{1+t^2}dt$
It's well known that:
$\displaystyle \int_1^{2+\sqrt{3}}\dfrac{\log(t)}{1+t^2}dt=\int_1^{2-\sqrt{3}}\dfrac{\log(t)}{1+t^2}dt=\dfrac{C}{3}$
(recall that, $\tan\left(\dfrac{\pi}{12}\right)=2-\sqrt{3}$ and see Integral: $\int_0^{\pi/12} \ln(\tan x)\,dx$ )
There is a large multitude of different representations of the Catalan constant. See the following links for some of them:
FDP,
This integral was detailed in a paper you wrote some years ago.
Anyway it’s an endless pleasure to see how you manage such complicated calculations.
I suggest we name this generalized integral « FDP integral »
With my reiterated well deserved warm congratulations.
fjaclot
• I still don't know where this integral comes from. I only know that this integral was found by a mathematician on an internet site that is no longer online. – FDP Apr 3 at 17:46
A more natural proof. \begin{align} \beta&=\sqrt{3}-1\\ J&=\int_0^1 \frac{\arctan\left(\frac{x(1-x)}{2-x}\right)}{x}dx\\ &\overset{\text{IBP}}=\left[\arctan\left(\frac{x(1-x)}{2-x}\right)\ln x\right]_0^1-\int_0^1 \frac{(x^2-4x+2)\ln x}{(x^2+\beta x+2)(x^2-(\beta+2)x+2)}dx\\ &=-\int_0^1 \frac{(x^2-4x+2)\ln x}{(x^2+\beta x+2)(x^2-(\beta+2)x+2)}dx\\ &=\int_0^1 \frac{\beta\ln x}{2(x^2-(2+\beta) x+2)}-\int_0^1 \frac{(2+\beta)\ln x}{2(x^2+\beta x+2)}\\ &=\underbrace{\int_0^1 \frac{2\ln x}{\beta\left(\left(\frac{2x-2-\beta}{\beta}\right)^2+1\right)}dx}_{y=\frac{\beta}{2+\beta-2x}}-\underbrace{\int_0^1 \frac{2\ln x}{(2+\beta)\left(\left(\frac{2x+\beta}{2+\beta}\right)^2+1\right)}dx}_{y=\frac{2x+\beta}{2+\beta}}\\ &=-\int_{\frac{\beta}{\beta+2}}^1 \frac{\ln y}{1+y^2}dy\\ &\overset{y=\tan \theta}=-\int_{\frac{\pi}{12}}^{\frac{\pi}{4}}\ln\left(\tan \theta\right)d\theta\\ &=-\int_{0}^{\frac{\pi}{4}}\ln\left(\tan \theta\right)d\theta+\int_0^{\frac{\pi}{12}}\ln\left(\tan \theta\right)d\theta\\ &=\text{G}-\frac{2}{3}\text{G}\\ &=\boxed{\dfrac{1}{3}\text{G}} \end{align} NB: For the latter integral see Integral: $\int_0^{\pi/12} \ln(\tan x)\,dx$ |
# Thread: Write Algebraic Expression in x
1. ## Write Algebraic Expression in x
Kim invests $22,000, some in stocks and the rest in bonds.
Let x = amount invested in stocks.
Write an algebraic expression in x for "the amount invested in bonds."
2. ...Why not have an attempt yourself?
$22,000 is spent. x is spent on stocks. Everything that's left over is spent in bonds.
How much is spent in bonds?
3. Let B = everything that's left over is spent in bonds.
B - (x + 22,000).
Is this the correct expression?
4. You've overcomplicated this.
She starts with $22,000$. Then she spends $x$ on stocks.
At this point, she has $22,000 - x$ left.
If you let B equal the amount spent on Bonds, $B = 22,000 - x$
5. Thank you very much. |
Issue No.02 - April-June (2006 vol.28)
pp: 81-86
Tim Bergin , American University
ABSTRACT
Carl Hammer was a pioneer in many ways, but he was foremost an organizer and a tireless promoter of computing.
INDEX TERMS
Carl Hammer, Univac, professional associations
CITATION
Tim Bergin, "Biographies: Carl Hammer (1914-2004)", IEEE Annals of the History of Computing, vol.28, no. 2, pp. 81-86, April-June 2006, doi:10.1109/MAHC.2006.26
• CommentRowNumber1.
• CommentAuthorUrs
• CommentTimeApr 13th 2018
added also the complementary cartoon for D-branes in string perturbation theory (the usual picture)
• CommentRowNumber2.
• CommentAuthorUrs
• CommentTimeApr 13th 2018
complementary, that is, to the cartoon for the black D-branes which I had added a few days back. That must have been in the brief period where the announcement mechanism was not working.
• CommentRowNumber3.
• CommentAuthorUrs
• CommentTimeSep 29th 2018
• (edited Sep 29th 2018)
But actual checks of the proposal that D-brane charge is given by K-theory, via concrete computation in boundary conformal field theory, have revealed some subtleties:
• Stefan Fredenhagen, Thomas Quella, Generalised permutation branes, JHEP0511:004, 2005 (arXiv:hep-th/0509153)
It might surprise that despite all the progress that has been made in understanding branes on group manifolds, there are usually not enough D-branes known to explain the whole charge group predicted by (twisted) K-theory.
• CommentRowNumber4.
• CommentAuthorUrs
• CommentTimeDec 1st 2018
finally added the original article
although it appears that we have modified the type II theory by adding something new to it, we are now arguing that these objects are actually intrinsic to any nonperturbative formulation of the type II theory; presumably one should think of them as an alternate representation of the black $p$-branes
as well as more review texts, such as
The ambition of that book is really nice. Sad that the publisher didn’t bother to have a native English speaker look over it even briefly. |
# Proof of linear independence
How would I be able to prove this question:
Let $x$ and $y$ be linearly independent elements of a vector space $V$. Show that $u = ax+by$ and $v=cx+dy$ are linearly independent if and only if $ad-bc$ does not equal $0$.
I know that $u$ and $v$ are linear combinations of $V$ which will make them span $V$. Also, if the determinant is equal to $0$ then it will be a singular matrix and if it is singular then it will have free variables which will make it dependent, but how can I show this mathematically?
-
Why do you say that $u$ and $v$ span $V$? $V$ could be $100$-dimensional. Step 1 of learning how to write mathematics is being careful with the meanings of your words. – Erick Wong Sep 22 '12 at 23:15
Opps, you are right. I am sorry for that! Thank you for clarifying. – Q.matin Sep 22 '12 at 23:23
You don’t have any reason to think that $x$ and $y$ span $V$: all you know is that they are linearly independent. The dimension of $V$ might well be greater than $2$, in which case no two-element subset of $V$ will span $V$, though many will be linearly independent.
You have two things to show:
1. If $ad-bc\ne0$, then $u$ and $v$ are linearly independent.
2. If $u$ and $v$ are linearly independent, then $ad-bc\ne0$.
It’s probably easiest to prove (1) by proving the contrapositive: if $u$ and $v$ are linearly dependent, then $ad-bc=0$. That’s because the assumption of linear dependence gives you something very concrete to work with: if $u$ and $v$ are linearly dependent, there are scalars $\alpha$ and $\beta$, at least one of which is non-zero, such that $\alpha u+\beta v=0$. Now write this out in terms of $x$ and $y$:
$$\alpha(ax+by)+\beta(cx+dy)=0\;.$$
Collect the $x$ and $y$ terms on the lefthand side:
$$(\alpha a+\beta c)x+(\alpha b+\beta d)y=0\;.$$
By hypothesis $x$ and $y$ are linearly independent, so
$$\begin{cases} \alpha a+\beta c=0\\ \alpha b+\beta d=0\;. \end{cases}$$
This says that $\begin{bmatrix}\alpha\\\beta\end{bmatrix}$ is a non-zero solution to the equation
$$\begin{bmatrix}a&b\\c&d\end{bmatrix}z=0\;.$$
What does that tell you about $\det\begin{bmatrix}a&b\\c&d\end{bmatrix}$, and hence about $ad-bc$?
To prove (2), again go for the contrapositive: if $ad-bc=0$, then $u$ and $v$ are linearly dependent. You can pretty much just reverse the reasoning in the argument that I outlined for (1).
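(One concrete way to carry out that reversal, in case it helps; this is just one route, not necessarily the intended one. Suppose $ad-bc=0$. If all of $a,b,c,d$ are zero, then $u=v=0$ and the pair is trivially dependent. Otherwise at least one of the pairs $(d,b)$, $(c,a)$ is nonzero, and

$$du-bv=d(ax+by)-b(cx+dy)=(ad-bc)x=0\,,\qquad cu-av=c(ax+by)-a(cx+dy)=(bc-ad)y=0\,,$$

so whichever of these combinations has a nonzero coefficient gives a nontrivial linear relation between $u$ and $v$.)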
-
Beautiful, thank you very much. I understand now thanks to your help!!! – Q.matin Sep 22 '12 at 23:34
For one direction, suppose $ad-bc \ne 0$, and suppose $\lambda, \mu \in \mathbb{R}$ satisfy $$\lambda u + \mu v = 0$$ Show that this implies that $\lambda = \mu = 0$, and hence that $u$ and $v$ are linearly independent.
For the other direction, suppose $ad-bc = 0$ and find nonzero $\lambda, \mu$ satisfying the above equation.
Post in the comments if you need more help.
-
I am going to replace your lambda and mu with i and j respectively because I don't know how to write that here. So, I got iu+jv=0 which implies that iu=-jv then if we replace iu we get that -jv+jv=0 and if we factor out j we get that j(v-v)=0 thus j(0)=0 -> j=0. Is that right? – Q.matin Sep 22 '12 at 23:27
Absolutely not; $\mu \cdot 0 = 0$ for any $\mu$ so it certainly doesn't imply $\mu = 0$. You need to write $u$ and $v$ in the above equation in terms of $x$ and $y$, and collect terms. Then use the raw definition of linear independence of $x$ and $y$. – Clive Newstead Sep 22 '12 at 23:29
I am going to go out for a jog to clear my head because that is a stupid mistake I made. Brian above provided me a detailed instruction so I understand now. But still thank you a lot Clive! – Q.matin Sep 22 '12 at 23:36 |
# Book:Walter Ledermann/Introduction to the Theory of Finite Groups
## Walter Ledermann: Introduction to the Theory of Finite Groups
Published $1949$. |
Robust $L_2-L_{\infty}$ Filter Design for Uncertain Time-Delay Systems Using a Parameter-Dependent Lyapunov Function Technique
• Choi, Hyoun-Chul (ASRI, School of Electrical Engineering and Computer Science, Seoul National University) ;
• Jung, Jin-Woo (ASRI, School of Electrical Engineering and Computer Science, Seoul National University) ;
• Shim, Hyung-Bo (ASRI, School of Electrical Engineering and Computer Science, Seoul National University) ;
• Seo, Jin-H. (ASRI, School of Electrical Engineering and Computer Science, Seoul National University)
• Published : 2008.10.31
Abstract
An LMI-based method for robust $L_2-L_{\infty}$ filter design is proposed for polytopic uncertain time-delay systems. By using the Projection Lemma and a suitable linearizing transformation, a strict LMI condition for $L_2-L_{\infty}$ filter design is obtained, which does not involve any iterations for design-parameter search, any couplings between the Lyapunov and system matrices, nor any system-dependent filter parameterization. Therefore, the proposed condition enables one to easily adopt, with the help of efficient numerical solvers, a parameter-dependent Lyapunov function approach for reducing conservatism, and to design both robust and parameter-dependent filters for uncertain and parameter-dependent time-delay systems, respectively.
# What is Inflation
• General increase in prices, or more precise, the purchasing power of your money decreases
• Inflation isn't when only one particular product's price increases, but when prices increase across the board
• In the late 1970s, Fed Chairman Paul Volcker made taming inflation his top priority
• By increasing interest rates, the Fed was able to tame inflation but, in the process, essentially created a recession
# Costs of Inflation
• Shoe-Leather Costs
• The increased costs of transactions due to inflation
• Since people avoid holding onto money during periods of inflation, people waste time and energy making transactions to avoid sitting on cash.
• The banking sector grows as people make more frequent transactions
• Real costs of changing listed prices
• In hyperinflation, countries will avoid changing prices.
• Unit-of-Account Costs
• Money becomes a less reliable unit of measurement
• "Profit" due to inflation is still taxed and therefore investment is discouraged
• This role of the dollar as a basis for contracts and calculation is called the unit-of-account role of money.
# Winners & Losers from Inflation
• Nominal interest rate
• the actual interest that is paid on a loan
• Real interest rate
• Nominal interest rate - Expected inflation rate
• Nominal vs. Real
• The nominal interest rate is the rate actually paid.
• The real interest rate is actual return the lender receives net of inflation
• Borrowers win with inflation because they pay back in nominal dollars.
• Savers and lenders lose because the amount of money they receive is worth less.
• Countries with uncertain levels of inflation generally won't issue long-term loans
# Wage-Price Spiral
• Combination of "cost-push" and "demand-pull" inflation leads to a wage-price spiral
• When there is too much money chasing too few goods, the price of products will tend to increase which leads to "demand-pull" inflation
• When workers demand higher wages as a result of inflated prices, prices of products consequently go up as well, leading to this "wage-price" spiral
• Increased price of products leads to higher wages leads to increased price of products and so on
• Keynesians tend to favor this model of how inflation works and hold that prices are sticky downward, or downward inflexible
# Monetarist View of Inflation
• Milton Friedman viewed inflation as simply an issue of money supply
• The quantity theory of money is quite simple: an increase in the supply of money will correspondingly increase inflation (the equation of exchange after this list makes this precise)
• The Austrian view argues that using the Consumer Price Index (or CPI) to measure inflation is inaccurate because inflation in unevenly spread through different goods and services
• Paul Krugman, a Nobel Prize winning, "Keynesian" economist, rejects this Austrian view of inflation stating that the monetary base tripled in 2011 and yet there was no widespread inflation
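The equation of exchange referenced above, in its standard textbook form (not part of the original notes):

$$M V = P Y$$

where $M$ is the money supply, $V$ is the velocity of money, $P$ is the aggregate price level, and $Y$ is real output. If $V$ and $Y$ are roughly stable, growth in $M$ shows up as growth in $P$, which is the monetarist claim.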
# Measurement and Calculation of Inflation
• Aggregate Price Level
• Measure of the overall prices in the economy
• Market Basket: a hypothetical set of consumer purchases of goods and services
• Price Index
• Measures the cost of purchasing a given market basket in a given year
• (index value is set to 100 in the base year)
• $\text{Price index in a given year} = \dfrac{\text{Cost of market basket in a given year}}{\text{Cost of market basket in base year}} \times 100$
• $\text{Inflation rate} = \dfrac{\text{Price index in year 2} - \text{Price index in year 1}}{\text{Price index in year 1}} \times 100$
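A worked sketch of these two formulas with a made-up two-good market basket:

```python
# Made-up market basket: quantities are fixed, prices vary by year.
basket = {"bread": 50, "gasoline": 100}     # quantities purchased
prices = {
    "base":  {"bread": 2.00, "gasoline": 3.00},
    "year1": {"bread": 2.10, "gasoline": 3.30},
    "year2": {"bread": 2.25, "gasoline": 3.60},
}

def basket_cost(year):
    return sum(qty * prices[year][good] for good, qty in basket.items())

def price_index(year):
    # Index value is set to 100 in the base year.
    return basket_cost(year) / basket_cost("base") * 100

index1, index2 = price_index("year1"), price_index("year2")
inflation = (index2 - index1) / index1 * 100
print(f"index year 1: {index1:.1f}, index year 2: {index2:.1f}")
print(f"inflation between year 1 and year 2: {inflation:.2f}%")
```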
# CPI, PPI & GDP Deflator
• Consumer Price Index (CPI)
• the most commonly used measure of inflation; it is based on the market basket of a typical urban American family
• The Bureau of Labor Statistics sends employees out to survey prices on a multitude of items in food, apparel, recreation, medical care, transportation and other categories
• CPI tends to overstate inflation (substitution bias and technological advances)
• Producer Price Index (PPI)
• measures the cost of a typical basket of goods and services that producers purchase
• Tends to be used as the "early warning signal" of changes in the inflation rate
• GDP Deflator
• $\text{GDP Deflator} = \dfrac{\text{Nominal GDP}}{\text{Real GDP}} \times 100$
• $\text{Real GDP} = \dfrac{\text{Nominal GDP}}{\text{GDP Deflator}} \times 100$
• Not exactly a price index, but it serves to show how much the aggregate price level has increased (a short numerical sketch appears at the end of this section)
• Unlike the CPI, the GDP deflator is not based on a fixed basket of goods and services.
• It's allowed to change with people's consumption and investment patterns
• The default "basket" in each year is the set of all goods that were produced in the country in that particular year.
• CPI, PPI, and GDP Deflator tend to move, generally, in the same direction
• Comparison of the CPI and the GDP deflator
• Basket: the CPI uses a fixed market basket, while the GDP deflator uses the set of goods produced in that year
• Prices of capital goods
• included in the GDP deflator (if produced domestically)
• excluded from the CPI
• Prices of imported consumer goods
• included in the CPI
• excluded from the GDP deflator
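A quick numerical sketch of the GDP deflator and real GDP formulas above (figures invented for illustration):

```python
# Invented figures for one year, purely for illustration.
nominal_gdp = 21_000.0   # output valued at current prices
real_gdp = 20_000.0      # the same output valued at base-year prices

gdp_deflator = nominal_gdp / real_gdp * 100        # 105.0
# Rearranging recovers real GDP from nominal GDP and the deflator:
real_gdp_check = nominal_gdp / gdp_deflator * 100  # 20000.0

print(f"GDP deflator: {gdp_deflator:.1f}")
print(f"real GDP recovered from the deflator: {real_gdp_check:.1f}")
```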
# a transformation which takes a $xz$ plane to a plane parallel to it
Find a linear transformation $T: \mathbb{R}^3 \to \mathbb{R}^3$ which takes the $XZ$ plane (i.e. $X=0$) to a plane parallel to it (i.e. $X=1$).
I thought along these lines:
Let $T$ be such a linear transformation. Then I assigned $T(0,0,1)=(1,0,1)$, $T(0,1,0)=(1,1,0)$,$T(1,0,0)=(1,0,-1)$
But the problem with it is that for any linear combination of $(0,1,0)$ and $(0,0,1)$, the transformation should yield something like $(1,y_1,z_1)$.
Thanks for the help!!
There is no linear map like this. What you want is an affine map. Many people say "linear" when they mean "affine," but it can get confusing when the difference is important.
Definitions
Linear maps preserve scalar multiplication and vector addition. $L$ is linear if $$L(x+u,y+v,z+w)=L(x,y,z)+L(u,v,w)$$ and $$L(ax,ay,az)=aL(x,y,z).$$
An affine map is linear plus a constant. $T$ is affine if $$T(x,y,z)=L(x,y,z)+v$$ where $L$ is some linear map and $v$ is a constant vector.
I would normally call the plane where $x=0$ the $YZ$- rather than $XZ$-plane.
An Affine Solution
Let $T(x,y,z)=(1,y,z)$. In matrix form, $$T(x,y,z)= \begin{bmatrix} 0 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x\\ y\\ z \end{bmatrix} + \begin{bmatrix} 1\\ 0\\ 0 \end{bmatrix}$$
It is very easy to show that $T$ is affine, and the plane where $x=0$ (and in fact all of $\mathbb{R}^3$) is mapped to the plane where $x=1$.
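A quick numerical check of this affine map, and of why additivity rules out a linear one (a sketch in Python/NumPy, not part of the original answer):

```python
import numpy as np

# The affine map T(x, y, z) = L(x, y, z) + v from above.
L = np.array([[0, 0, 0],
              [0, 1, 0],
              [0, 0, 1]], dtype=float)
v = np.array([1.0, 0.0, 0.0])

def T(p):
    return L @ p + v

# Points with x = 0 land in the plane x = 1.
p1, p2 = np.array([0.0, 2.0, -3.0]), np.array([0.0, -1.0, 5.0])
print(T(p1), T(p2))          # first coordinate is 1 in both cases

# A linear map cannot do this: additivity would force the image of
# p1 + p2 to have first coordinate 1 + 1 = 2, not 1.
print(T(p1) + T(p2))         # first coordinate 2, outside the x = 1 plane
```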
There is no such linear map
You can easily prove there is no such linear map: suppose $T$ is linear and there are two points in the $YZ$-plane--call them $(0,y_1,z_1)$ and $(0,y_2,z_2)$--that get mapped to the $x=1$ plane--let's say $T(0,y_1,z_1)=(1,u_1,v_1)$ and $T(0,y_2,z_2)=(1,u_2,v_2)$. Then \begin{align}T(0,y_1+y_2,z_1+z_2) =&T(0,y_1,z_1)+T(0,y_2,z_2)\\ =&(1,u_1,v_1)+(1,u_2,v_2)\\ =&(2,u_1+u_2,v_1+v_2) \end{align} which is not in the $x=1$ plane. So the entire $YZ$-plane does not get mapped to the $x=1$ plane by any linear map. |
# Time derivative of expectation value of position
It is necessary to distinguish between the position variable, the operator of position, and the mean (average) value of position. Here one works in the Schrödinger representation, which means that all the time dependence is carried by the wave function, whereas the operators are time-independent. Moreover, in the position representation the operator of position is $$\hat{x}=x$$: simply multiplication by the time-independent variable $$x$$, which is integrated against the wave function.
In other words: the average position $$\langle x\rangle$$ is time-dependent, but its operator $$x$$ is time-independent.
You may also want to consult this answer.
An analogy might be useful.
Suppose you want to compute the time dependence of the average weight of a population. The average weight is just \begin{align} \langle w\rangle = \int dw\, w\, N(w) \tag{1} \end{align} where $$N(w)$$ is the probability of people having weight $$w$$. Now, what changes with time is not the weight $$w$$: $$1$$ kg today is the same as $$1$$ kg tomorrow. What changes in time is the probability $$N(w)$$ of having persons of a certain weight: some people will gain weight over time, some will lose weight, so a better expression for the average weight would be \begin{align} \langle w(t)\rangle = \int dw\, w\, N(w,t) \end{align} and of course the rate of change of this average is \begin{align} \frac{d\langle w(t)\rangle}{dt}= \int dw\, w\, \frac{\partial N(w,t)}{\partial t}\tag{2} \end{align} Thus, in (2), what changes is the probability distribution. This $$N(w,t)$$ is in fact nothing but the probability distribution $$\vert \psi(x,t)\vert^2$$ in your problem.
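A minimal numerical sketch of this analogy (the distribution below is invented purely for illustration): the weight grid $$w$$ never changes, only $$N(w,t)$$ does, and that is what makes $$\langle w(t)\rangle$$ move.

```python
import numpy as np

w = np.linspace(0.0, 150.0, 3001)    # fixed weight grid in kg; it never changes

def N(t):
    # Made-up normalized distribution whose mean drifts slowly with time.
    mean, sigma = 70.0 + 0.5 * t, 10.0
    return np.exp(-(w - mean) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

def mean_weight(t):
    return np.trapz(w * N(t), w)     # <w(t)> = integral of w N(w,t) dw

dt = 1e-3
rate = (mean_weight(dt) - mean_weight(0.0)) / dt
print(f"<w(0)> = {mean_weight(0.0):.2f} kg, d<w>/dt = {rate:.3f} kg per unit time")
```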
$$x$$ is just a position variable or operator, if you prefer. It is not the position of the particle, which instead is $$\langle x\rangle = \int dx\, x \left|\Psi\right|^2~.$$ $$\langle x\rangle$$ may depend on $$t$$, but $$x$$ does not depend on $$t$$. |
# Number of divisors of 6000
The number of divisors of $6000$, where $1$ and $6000$ are also considered as divisors of $6000$ is
1. $40$
2. $50$
3. $60$
4. $30$
N= $6000$, we can write it in form of multiple of co-primes
N= $2^4 \cdot 3^1 \cdot 5^3$
No of divisors = $(4+1)(1+1)(3+1)= 40$
Option A) is correct
"we can write it in form of multiple of co-primes"
Actually they are primes. Every positive integer ($>1$) can be written as the product of primes having positive powers which is known as Fundamental Theorem of Arithmetic.
2^4 * 3 * 5^3
No of divisors = (4+1)(1+1)(3+1) = 40
Option A is correct
6000 can be written as 2^4 * 5^3 *3.
therefore any divisor of 6000 can be formed by taking any combination of the above factors.
so for '2' there are 5 choices for its power (from $2^0$ to $2^4$) in a divisor of 6000.
similarly for '5' there are 4 choices and for '3' there are 2 choices.
therefore total no of choices = 5*4*2 = 40.
therefore (A) is the correct answer.
https://www.math.upenn.edu/~deturck/m170/wk2/numdivisors.html
You can find the factors as 6000 = $2^{4}*3*5^{3}$, then as per the trick you add 1 to each exponent and multiply them, i.e.,
$(4+1)*(1+1)*(3+1) = 40$
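A small brute-force check of the count, and of the exponent-plus-one rule, in Python:

```python
from collections import Counter

def prime_factorization(n):
    # Simple trial division; fine for a number this small.
    factors, d = Counter(), 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

n = 6000
factors = prime_factorization(n)            # {2: 4, 3: 1, 5: 3}

count_by_formula = 1
for exponent in factors.values():
    count_by_formula *= exponent + 1        # product of (exponent + 1)

count_by_brute_force = sum(1 for d in range(1, n + 1) if n % d == 0)
print(dict(factors), count_by_formula, count_by_brute_force)   # both counts are 40
```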
# Is all radiation electromagnetic radiation?
Are the two equivalent, or is electromagnetic radiation a subset of radiation? I am further confused by the fact that electromagnetic radiation includes both ionizing and non-ionizing types of radiation. The closest answer I can think of is that electromagnetic radiation is the set of all forms of radiation except alpha and beta particles, because they are not massless photons. The only site which I found which supports this is this one.
Could someone please give a definitive distinction of the two. I would also appreciate an exhaustive list of everything that is, and is not Electromagnetic Radiation.
It is a matter of confusing terminology in the present times, when so much differentiation has happened in physics-related scientific disciplines.
# Poisson, Guinand, Meyer
[Posted on March 27, 2016]
A recent highly interesting paper by Yves Meyer (PNAS, paywalled; local version at ENS Cachan, and seminar notes) constructs explicitly new Poisson-type summation formulas (building on previous little-known work of Andrew-Paul Guinand and an existence result of Nir Lev and Aleksander Olevskii): the big difference with Poisson summation is that the new formulas are not supported on a lattice but only on a locally finite set (and thus provide new examples of crystalline measures).
Since these new results involve some arithmetic (see below) I’ve asked over at MO whether this was known to number theorists, but there hasn’t been any immediate answer, so perhaps not and there’s probably room for interesting further work on the topic.
To state things very explicitly (for my own benefit, but also just for the beauty of it), here are the formulas taken directly from Meyer's paper:
Poisson (Dirac comb case): on a lattice $\Gamma\subset\mathbb{R}^n$ and its dual $\Gamma^*$ we have for any function $f$ in the Schwartz class $\mathcal{S}(\mathbb{R}^n)$ that
$\displaystyle \mbox{vol}(\Gamma) \sum_{\gamma\in\Gamma}f(\gamma) = \sum_{\eta \in\Gamma^*}\widehat{f}(\eta)$
Poisson (corollary of Dirac comb case) : for every $\alpha,\beta \in\mathbb{R}^n$ we have (in terms of distributions to make the comb more explicit still)
$\displaystyle \mbox{vol}(\Gamma) \sum_{\gamma\in\Gamma +\alpha} e^{2i\pi\beta .\gamma}\delta_{\gamma} = e^{2i\pi\alpha .\beta} \sum_{\eta \in\Gamma^* +\beta} e^{2i\pi\alpha .\eta}\delta_{\eta}$
Guinand: denote for any $n\in\mathbb{N}$ by $r_3(n)$ the number of representations of $n$ as a sum of three squares (by Legendre's theorem this is nonzero exactly for those $n$ not of the form $4^j(8k+7)$). Then, introducing Guinand's distribution (acting on functions of the variable $t$)
$\displaystyle \sigma := -2\frac{d}{dt}\delta_0 + \sum_{n=1}^{+\infty} \frac{r_3(n)}{\sqrt{n}} (\delta_{\sqrt{n}}-\delta_{-\sqrt{n}})$
then we have $\langle \sigma ,f\rangle = \langle -i\sigma ,\widehat{f}\rangle$.
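Before moving on to Meyer's examples, here is a small brute-force sketch (my own illustration, not taken from the paper) of the coefficient $r_3(n)$ that these distributions are built from; it counts ordered representations with signs, so for instance $r_3(1)=6$.

```python
def r3(n):
    """Number of (x, y, z) in Z^3 with x^2 + y^2 + z^2 == n, by brute force."""
    bound = int(n ** 0.5) + 1
    return sum(1
               for x in range(-bound, bound + 1)
               for y in range(-bound, bound + 1)
               for z in range(-bound, bound + 1)
               if x * x + y * y + z * z == n)

# r3(n) vanishes exactly when n is of the form 4^j (8k + 7) (Legendre's theorem).
print([r3(n) for n in range(1, 16)])
# [6, 12, 8, 6, 24, 24, 0, 12, 30, 24, 24, 8, 24, 48, 0]
```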
Meyer (first example) : introducing the function $\chi$ on $\mathbb{N}$ (this is a clash of notation with Dirichlet characters) by $\chi(n)=-\frac{1}{2}$ when $n\not\equiv 0\pmod{4}$, $\chi (n)=4$ when $n\equiv 4\pmod{16}$ and $\chi (n)= 0$ when $n\equiv 0\pmod{16}$ then with the distribution
$\displaystyle \tau := \sum_{n=1}^{+\infty} \frac{\chi(n)r_3(n)}{\sqrt{n}} (\delta_{\frac{\sqrt{n}}{2}}-\delta_{-\frac{\sqrt{n}}{2}})$
we have $\langle \tau ,f\rangle = \langle -i\tau ,\widehat{f}\rangle$.
The support of $\sigma$ and $\tau$ are thus defined as subsets of $\{\pm\sqrt{n}|n\in\mathbb{N}\}$ and $\{\pm\frac{\sqrt{n}}{2}|n\in\mathbb{N}\}$ by their respective arithmetic conditions, and thus are definitely not equally spaced lattice points.
Meyer (second example) : with the distribution
$\displaystyle \rho := 2\pi\delta_{\frac{1}{2}} +2\pi\delta_{-\frac{1}{2}} + \sum_{n=1}^{+\infty} \frac{\sin(\pi\sqrt{n})r_3(n)}{\sqrt{n}} (\delta_{\frac{\sqrt{n}}{2}+\frac{1}{2}}+\delta_{\frac{\sqrt{n}}{2}-\frac{1}{2}}+\delta_{-\frac{\sqrt{n}}{2}+\frac{1}{2}}+\delta_{-\frac{\sqrt{n}}{2}-\frac{1}{2}} )$
we have $\langle \rho ,f\rangle = \langle \rho ,\widehat{f}\rangle$ (very nice!).
There are several other examples in Meyer’s paper, as well as higher-dimensional constructions (that I haven’t absorbed yet, so I’ll stop here).
Update (March 27): two relevant papers I've just found
• On the Number of Primitive Representations of Integers as Sums of Squares by Shaun Cooper and Michael Hirschhorn, published in Ramanujan J (2007) 13:7–25, which in particular provides the explicit formula $r_3(n)=\sum_{d^2|n}r_3^p\big ( \frac{n}{d^2}\big )$ where the function $r_3^p$ is in turn given explicitly ($p$ is a label standing for 'primitive')
• Irregular Poisson Type Summation by Yu. Lyabarskii and W.R. Madych, published in Sampling Theory in Signal and Image Processing, Vol. 7, No. 2, May 2008, pp. 173–186, which does prove a Poisson-type formula with irregularly spaced sampling points (but, if I understand correctly, the examples they mention at the end show it is still different from the results of Guinand and Meyer, to be confirmed)
# How do you find the Vertical, Horizontal, and Oblique Asymptote given y = (2x+4)/( x^2-3x-4)?
Jun 28, 2016
so we have vertical asymptotes at $x = - 1 , 4$
for horizontal and slope asymptotes, ${\lim}_{x \to \pm \infty} y = 0$, so $y = 0$ is the horizontal asymptote and there is no oblique asymptote
#### Explanation:
for vertical asymptotes, we look at where the denominator is zero
so
${x}^{2} - 3 x - 4 = \left(x + 1\right) \left(x - 4\right) \implies x = - 1 , 4$
to check for possible indeterminate forms (removable discontinuities) we note that
$y \left(- 1\right) = \frac{2}{0}$, which is undefined,
and
$y \left(4\right) = \frac{12}{0}$, which is also undefined (nonzero over zero, so nothing cancels)
so we have vertical asymptotes at $x = - 1 , 4$
for horizontal and slope asymptotes we look at the behaviour of the function as $x \to \pm \infty$
so we re-write
${\lim}_{x \to \pm \infty} \frac{2 x + 4}{{x}^{2} - 3 x - 4}$
as
${\lim}_{x \to \pm \infty} \frac{\frac{2}{x} + \frac{4}{x} ^ 2}{1 - \frac{3}{x} - \frac{4}{x} ^ 2} = 0$ |
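A quick symbolic check of these asymptotes (a sketch using SymPy, assuming it is available):

```python
import sympy as sp

x = sp.symbols('x')
y = (2*x + 4) / (x**2 - 3*x - 4)

# Vertical asymptotes: zeros of the denominator (the numerator doesn't cancel them).
print(sp.solve(sp.denom(y), x))                       # [-1, 4]

# Horizontal asymptote: limits at plus and minus infinity.
print(sp.limit(y, x, sp.oo), sp.limit(y, x, -sp.oo))  # 0 0

# Degree of numerator < degree of denominator, so the polynomial quotient is 0
# and there is no oblique asymptote.
print(sp.div(sp.numer(y), sp.denom(y), x)[0])         # 0
```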
I am doing some BER simulations with GNU Radio (stay tuned for the next post), and during my experiments I have stumbled upon a bug in the “Decode CCSDS 27” block. This block is a Viterbi decoder for the CCSDS convolutional code with $$r=1/2$$, $$k=7$$ (note that the convention used by this block is first POLYA then POLYB so it doesn't match the NASA-DSN convention nor the CCSDS/NASA-GSFC conventions, as I have mentioned in another post).
The bug consists in the block entering a “degraded” state after it has processed many symbols (on the order of several millions). In this degraded state, it doesn’t decode properly, producing lots of bit errors even if no input symbols are in error. Fortunately, there is another block in GNU Radio which can decode the CCSDS convolutional code, the “CC Decoder” included in FECAPI. This block doesn’t seem to suffer this issue. Here I describe how to replicate the bug, how to replace “Decode CCSDS 27” by “CC Decoder” and some other miscellaneous things related to this bug.
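For reference, this is roughly how the replacement looks when instantiated from Python (a sketch from memory of the GNU Radio 3.7-era FECAPI; the frame size is arbitrary and the parameter order, polynomial values and sign convention should all be checked against your installed version):

```python
from gnuradio import fec

# CCSDS r=1/2, k=7 convolutional code. The FECAPI examples usually give the
# generator polynomials as the integers [109, 79]; depending on the symbol
# convention of your encoder you may need to swap them or invert one branch.
frame_size = 2048                      # bits per frame; arbitrary choice here
cc_def = fec.cc_decoder.make(frame_size,
                             7,        # constraint length k
                             2,        # inverse code rate (rate 1/2)
                             [109, 79],
                             0,        # start state
                             -1,       # end state (unspecified)
                             fec.CC_STREAMING,
                             False)    # not padded

# The definition object is then wrapped in a "FEC Extended Decoder"
# (fec.extended_decoder in Python, or the corresponding GRC block),
# which takes the place of "Decode CCSDS 27" in the flowgraph.
```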
## Viterbi decoding for NanoCom U482C
The NanoCom U482C is a transceiver made by GOMspace intended for cubesats and other small satellites. Currently, it seems to be out of production, since it has been superseded by the newer NanoCom AX100, but nevertheless the U482C is being flown in new satellites, such as the QB50 AU03 INSPIRE-2. The U482C is also used in GOMspace's cubesat GOMX-1, so we may say that GOMX-1 is the reference satellite for U482C.
My gr-satellites project includes a partially reverse-engineered U482C decoder which is able to decode GOMX-1 and several other satellites. It does CCSDS descrambling and Reed-Solomon decoding. Recently, Jan PE0SAT made a recording of INSPIRE-2. I tried to decode it with gr-satellites and although the signal was very good, the Reed-Solomon decoder failed. The history behind this recording is interesting. After being released from the ISS near the end of May, INSPIRE-2 wasn’t transmitting as it should. The satellite team got in contact with Amateurs having powerful stations to try to telecommand the satellite and get it transmitting. Eventually, the CAMRAS 25m dish was used to telecommand and activate INSPIRE-2. Later, Jan made a recording from his groundstation.
After exchanging some emails with the satellite team, I learnt that the U482C also supports an $$r=1/2$$, $$k=7$$ convolutional code, which is used by INSPIRE-2 but not by other satellite I’ve seen. I have added Viterbi decoding support for the U482C decoder in gr-satellites, so that INSPIRE-2 can now be decoded. Here I describe some details of the implementation.
## Decoding AO-40 uncoded telemetry
AO-40 is an Amateur satellite that was active between 2000 and 2004. It had several transponders and beacons covering many bands from HF to microwave, and its position in a HEO orbit provided several consecutive hours of coverage each day and allowed long-distance contacts. Since then, many interesting things have happened with Amateur satellites, particularly the large increase in the number of cubesats over the last few years, but even so, we haven't seen any other satellite with the characteristics of AO-40, nor is one to be expected in the near future.
I was quite young when AO-40 was operational, so for me this is all history. However, Pieter N4IP has posted recently on Twitter some IQ recordings of AO-40 that he made back in 2003. I have been playing with these recordings to see what AO-40 was like. One of the things I've done is to write my own telemetry decoder using GNU Radio.
AO-40 transmitted telemetry using 400bps BPSK. There were two modes: an uncoded mode which used no forward error correction and an experimental FEC mode proposed by Phil Karn KA9Q. The FEC mode was used later in the FUNcube satellites, and I’ve already talked about it in a previous post. The beacon in Pieter’s recordings is in uncoded mode. Here I describe this mode in detail and how my decoder works. The decoder and a small sample taken from Pieter’s recordings have already been included in gr-satellites.
## Low latency decoder for LilacSat-1
LilacSat-1 is one of the QB50 project cubesats. It will be released tomorrow from the ISS. The most interesting aspect of this satellite is that it has an Amateur Radio transponder with an FM uplink on the 2m band and a Codec2 1300bps digital voice downlink on the 70cm band. It is the first time that an Amateur satellite really uses digital voice, as previous tests have only used an analog FM repeater to relay D-STAR and similar digital voice modes. LilacSat-1 however implements a Codec2 encoder in software using its ARM processor. I have talked about LilacSat-1 Codec2 downlink already in this blog. Here I present a low latency decoder for the digital voice downlink that I have recently included in gr-satellites.
## gr-satellites refactored
In August last year I started my gr-satellites project as a way to make my experiments in decoding Amateur Satellite telemetry easier to use for other people. Since then, gr-satellites has become a stable project which supports 17 satellites using several different protocols. However, as time has gone by, I have been adding functionality in new GNU Radio OOT modules. Eventually, the core of gr-satellites depended on 5 OOT modules and another 7 OOT modules were used for each of the satellite families. This makes gr-satellites cumbersome to install from scratch and it also makes it difficult to track when each of the OOT modules is updated.
I have now refactored gr-satellites and included most of the OOT modules into gr-satellites, so that it is much easier to install and update. The only OOT modules I have kept separate are the following:
• gr-aausat, because it doesn’t use libfec for FEC decoding, and includes its own implementation of a Viterbi and RS decoder. Eventually I would like to modify gr-aausat to make it use libfec and include it into gr-satellites.
• beesat-sdr, because it is actively developed by TU Berlin and I have contributed some code to it. Also, the implementation of the decoder is quite different from everything else in gr-satellites.
• gr-lilacsat, because it is actively developed by Harbin Institute of Technology and I have contributed some code to it. However, as I explained in a previous post, the FEC decoding for these satellites is now done very differently in gr-satellites in comparison to gr-lilacsat, as I have replaced many custom blocks with stock GNU Radio blocks. I will have to examine carefully how much code from gr-lilacsat is actually needed in gr-satellites.
The refactored version is already available in the Github repository for gr-satellites. Users updating from older versions should note that gr-satellites is now a complete GNU Radio OOT module instead of a collection of GRC flowgraphs, so it should be built and installed with cmake as usual (see the README). The GRC flowgraphs are in the apps/ folder.
The OOT modules that have been included into gr-satellites will be deprecated and no longer developed independently. I will leave their Github repositories up with a note pointing to gr-satellites.
This is not the end of the story. There are some more things I want to do with gr-satellites in the next few weeks:
• Use cmake to build and install hierarchical flowgraphs, saving the user from this cumbersome step.
• Use cmake to build the python scripts associated to the decoders.
• Collect in a Git submodule the sample WAV files that are scattered across the different OOT modules. Add WAV samples for missing satellites. Use these WAVs to test decoders, perhaps even with some automation by a script.
And of course, there are many QB50 project satellites being launched starting next week. I’ll try to keep up and add decoders for them, especially for the ones using not so standard modes. I already have a working decoder for Duchifat-2, since I have been collaborating with their team at Herzliya Space Laboratory. I will also adapt the LilacSat-1 decoder from gr-lilacsat. This decoder has already been featured in this blog.
## Calibrating the Hermes-Lite 2.0 beta2 in Linrad
Lately, I have been trying to make an amplitude and phase calibration of my Hermes-Lite 2 beta2 in order to use Linrad’s smart noise blanker. This is quite a task because Linrad doesn’t support the Hermes-Lite 2 directly. Today I’ve finally managed to do it. Here I describe all my setup and calibration results.
During the last few days I’ve been experimenting with feeding signals from GNU Radio into Linrad using Linrad’s network protocol. Linrad has several network protocols designed to share data between different instances of Linrad, but generally these protocols are only supported by Linrad itself. The only other example I know of is MAP65, which can receive noise-blanked data from Linrad using the timf2 format.
Another possible use of gr-linrad is as an instrumentation for any kind of GNU Radio flowgraph. It is very easy to stream data into Linrad, so it can be used as a very nice waterfall display or to do any sort of signal processing, such as noise blanking or adaptive polarization. Here I describe how to get the test flowgraph in gr-linrad working and some aspects of the network protocol.
## GNU Radio decoder for AO-73
During the last few days, I have been talking with Edson PY2SDR about using GNU Radio to decode digital telemetry from AO-73 (FUNcube-1) and other FUNcube satellites. I hear that in Virginia Tech Groundstation they have a working GNU Radio decoder, but it seems they never published it.
The modulation that the FUNcube satellites use is DBPSK at 1200baud. The coding is based on a CCSDS concatenated code with a convolutional code and Reed-Solomon, but it makes extensive use of interleaving to combat the fading caused by the spin of the spacecraft. This system was originally designed by Phil Karn KA9Q for AO-40. Phil has a description of the AO-40 FEC system in his web and there is another nice description by James Miller G3RUH.
I took a glance at these documents and noted that it would be a nice and easy exercise to implement a decoder in GNU Radio, as I have most of the building blocks that are needed already working as part of gr-satellites. Today, I have implemented an out-of-tree module with a decoder for the AO-40 FEC in gr-ao40. There is another gr-ao40 project out there, but it seems incomplete. For instance, it doesn't have any code to search for the syncword. I have also added decoders for AO-73 and UKube-1 to gr-satellites.
The signal processing in gr-ao40 is as described in the following diagram taken from G3RUH’s paper.
First, the distributed syncword is searched for using a custom C++ block. It is possible to set a threshold in this block to account for several bit errors in the syncword. De-interleaving is done using another custom C++ block. For Viterbi decoding, I have used the "FEC Async Decoder" block from GNU Radio, since I like to use stock blocks when possible. Then, CCSDS descrambling is done with a hierarchical block from gr-satellites. Finally, the interleaved Reed-Solomon decoders are implemented in a custom C++ block that uses Phil Karn's libfec.
The complete FEC decoder is implemented as a hierarchical block as shown in the figure below.
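As an illustration of the syncword-search step (my own sketch, not the actual gr-ao40 block), correlating hard bits against a known pattern and accepting matches up to a configurable number of bit errors can look like this:

```python
import numpy as np

def find_syncword(bits, syncword, max_errors=3):
    """Return the indices where `syncword` occurs in `bits` with at most
    `max_errors` bit errors; both arguments are arrays of 0/1 hard bits."""
    bits = np.asarray(bits, dtype=np.uint8)
    sync = np.asarray(syncword, dtype=np.uint8)
    hits = []
    for i in range(len(bits) - len(sync) + 1):
        errors = np.count_nonzero(bits[i:i + len(sync)] ^ sync)  # Hamming distance
        if errors <= max_errors:
            hits.append(i)
    return hits

# Toy example: a made-up 16-bit syncword embedded with one bit flipped.
sync = np.array([1,1,0,1,0,0,1,0,1,0,1,1,0,0,0,1], dtype=np.uint8)
corrupted = sync.copy()
corrupted[5] ^= 1
stream = np.concatenate([np.random.randint(0, 2, 100).astype(np.uint8),
                         corrupted,
                         np.random.randint(0, 2, 100).astype(np.uint8)])
print(find_syncword(stream, sync, max_errors=1))   # should include index 100
```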
## Coding for HIT satellites (and other CCSDS satellites)
The Harbin Institute of Technology satellites LilacSat-2, BY70-1 and the upcoming LilacSat-1 all use a concatenated code with an $$r=1/2, k=7$$ convolutional code and a (255,223) Reed-Solomon code according to the CCSDS TM Synchronization and Channel Coding blue book specifications. The GNU Radio decoder gr-lilacsat by Wei BG2BHC includes a custom implementation of the relevant part of the CCSDS stack, probably ported into GNU Radio from some other software.
Recently, I have been working on decoding KS-1Q and I’ve seen that it uses the same CCSDS coding as the HIT satellites. This has made me realise that most of this CCSDS coding can be processed using stock GNU Radio blocks, without the need for custom blocks. The only exception is Reed-Solomon decoding. This can be done easily with gr-libfec, which provides an easy interface from GNU Radio to Phil Karn’s libfec. Here I look at the details of the CCSDS coding and how to process it with GNU Radio. I’ve updated the decoders in gr-satellites to use this kind of processing. I’ll also talk about the small advantages of doing it in this way versus using the custom implementation in gr-lilacsat.
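One piece of this coding chain that fits in a few lines is the CCSDS pseudo-randomizer (the scrambler applied to the frame, including the Reed-Solomon check bytes, before convolutional encoding). The sketch below is written from memory of the polynomial given in the TM Synchronization and Channel Coding blue book, so the output should be checked against the standard; the sequence is expected to start with the bytes ff 48 0e c0.

```python
import numpy as np

def ccsds_randomizer_sequence(nbits):
    """Pseudo-random bit sequence generated by h(x) = x^8 + x^7 + x^5 + x^3 + 1,
    seeded with all ones (the CCSDS TM pseudo-randomizer, from memory)."""
    bits = [1] * 8
    while len(bits) < nbits:
        bits.append(bits[-1] ^ bits[-3] ^ bits[-5] ^ bits[-8])
    return np.array(bits[:nbits], dtype=np.uint8)

def ccsds_descramble(frame_bits):
    # Descrambling is the same operation as scrambling: XOR with the sequence.
    seq = ccsds_randomizer_sequence(len(frame_bits))
    return np.asarray(frame_bits, dtype=np.uint8) ^ seq

print(bytes(np.packbits(ccsds_randomizer_sequence(32))).hex())  # expect 'ff480ec0'
```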
## KS-1Q decoded
In a previous post, I talked about my attempts to decode KS-1Q. Lately, WarMonkey, who is part of the satellite team, has been giving me some extra information and finally I have been able to decode the packets from the satellite. The decoder is in gr-ks1q, together with a sample recording contributed by Scott K4KDR. I’ve also added support for KS-1Q in gr-satellites. Here I look at the coding of the packets in more detail. |