# How to integrate ${x^3}/(x^2+1)^{3/2}$?
How to integrate $$\frac{x^3}{(x^2+1)^{3/2}}\ \text{?}$$
I tried substituting $x^2+1$ as $t$, but it's not working.
• To be honest that is the approach to use..it would be better to show where you got lost. Jul 29, 2015 at 14:12
• It's OK I got my mistake :) @Chinny84
– user220382
Jul 29, 2015 at 14:18
• Use $x^3=x(x^2+1)-x$.
– user65203
Jul 29, 2015 at 14:42
$$\int\frac{x^3}{(x^2+1)^{3/2}}dx$$
$u:=x^2,du=2xdx$
$$=\frac{1}{2}\int\frac{u}{(u+1)^{3/2}}du$$
$s:=u+1,ds=du$
$$=\frac{1}{2}\int\frac{s-1}{s^{3/2}}ds$$
$$=\sqrt s+\frac{1}{\sqrt s}$$
$s=u+1,u=x^2$
$$=\sqrt{x^2+1}+\frac{1}{\sqrt{x^2+1}}+C$$
$$\boxed{\color{blue}{=\frac{x^2+2}{\sqrt{x^2+1}}+C}}$$
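A quick sanity check by differentiating the boxed result:
$$\frac{d}{dx}\frac{x^2+2}{\sqrt{x^2+1}}=\frac{2x\sqrt{x^2+1}-\frac{x(x^2+2)}{\sqrt{x^2+1}}}{x^2+1}=\frac{2x(x^2+1)-x(x^2+2)}{(x^2+1)^{3/2}}=\frac{x^3}{(x^2+1)^{3/2}},$$
which is the original integrand.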
Alternative approach:
Let $x=\tan\theta$, $dx=\sec^2\theta d\theta$
\begin{align} \int\frac{x^3dx}{(x^2+1)^{3/2}}&=\int\frac{\tan^3\theta\cdot\sec^2\theta d\theta}{\sec^3\theta}\\&=\int\frac{\sin^3\theta d\theta}{\cos^2\theta}\\&=-\int\frac{\sin^2\theta d\cos\theta}{\cos^2\theta}\\&=\int1-\sec^2\theta d\cos\theta\\&=\cos\theta+\sec\theta+C\\&=\frac1{\sqrt{1+x^2}}+\sqrt{1+x^2}+C \end{align}
$$\int \frac{x^3}{(x^2+1)^{3/2}}dx=\int \frac{x(x^2+1)-x}{(x^2+1)^{3/2}}dx=\int \frac{x}{(x^2+1)^{1/2}}dx-\int \frac{x}{(x^2+1)^{3/2}}dx\\ =\sqrt{x^2+1}+\frac1{\sqrt{x^2+1}}+C$$
$x^2+1=t^2\Rightarrow x\,dx=t\,dt\;,\;x^2=t^2-1$ $$\int\frac{x^3}{(x^2+1)^{\frac{3}{2}}}dx=\int\frac{t^2-1}{t^2}dt=\int(1-t^{-2})dt=t+\frac{1}{t}+C$$ $t=\sqrt{1+x^2}\Rightarrow$ answer$=\sqrt{1+x^2}+\frac{1}{\sqrt{1+x^2}}+C=\frac{x^2+2}{\sqrt{x^2+1}}+C$
Let $x^2+1=t\implies 2xdx=dt$ or $xdx=\frac{dt}{2}$ $$\int \frac{x^3 dx}{(x^2+1)^{3/2}}=\frac{1}{2}\int \frac{(t-1)dt}{(t)^{3/2}}$$ $$=\frac{1}{2}\int \frac{(t-1)dt}{t^{3/2}}$$ $$=\frac{1}{2}\int (t^{-1/2}-t^{-3/2})dt$$ $$=\frac{1}{2}\left(2t^{1/2}+2t^{-1/2}\right)$$ $$=\sqrt{x^2+1}+\frac{1}{\sqrt{x^2+1}}+C$$ $$=\frac{x^2+2}{\sqrt{x^2+1}}+C$$ |
# Neves (video game)
Neves
Developer(s) Yuke's
Publisher(s) Yuke's
Platform(s)
Release date(s) Nintendo DS
• JP November 15, 2007
• NA November 6, 2007
• EU March 28, 2008
WiiWare
• JP May 26, 2009
• NA June 22, 2009
• EU June 11, 2010
Genre(s) Puzzle
Mode(s) Single player
Multiplayer
Neves (ハメコミ LUCKY PUZZLE DS Hamekomi Rakkī Pazuru Dīesu?) is a puzzle game developed by Yuke's Media Creations for the Nintendo DS, based on the Japanese Lucky Puzzle, a tangram-like dissection puzzle. In the game, players use the stylus to move, rotate and flip pieces on the DS's touch screen to clear puzzles. It features over 500 different puzzles from which to choose.
A sequel, Neves Plus (ハメコミ LUCKY PUZZLE Wii Hamekomi Rakkī Pazuru Uī?), was released for WiiWare in Japan on May 26, 2009, in North America on June 22, 2009 and in Europe on June 11, 2010.[1]
## Gameplay
In each puzzle the player is given an image which they then must try to recreate using only the following seven pieces:
• two identical right isosceles triangles, with sides length $\scriptstyle{1/2}$ and hypotenuse of length $\scriptstyle{1/\sqrt{2}}$
• four right trapezoids of various sizes - the side lengths are given with the first three sides creating the two right angles:
• two have side lengths $\scriptstyle{1/4}$, $\scriptstyle{1/4}$, $\scriptstyle{1/2}$, and $\scriptstyle{1/2\sqrt{2}}$
• one has side lengths $\scriptstyle{1/2\sqrt{2}}$, $\scriptstyle{1/2\sqrt{2}}$, $\scriptstyle{1/\sqrt{2}}$, and $\scriptstyle{1/2}$
• one has side lengths $\scriptstyle{1/\sqrt{2}}$, $\scriptstyle{1/2\sqrt{2}}$, $\scriptstyle{3/2\sqrt{2}}$, and $\scriptstyle{1/2}$
• one home plate-like pentagon, with sides $\scriptstyle{1/2}$, $\scriptstyle{1/2}$, $\scriptstyle{1/2}$, $\scriptstyle{1/2\sqrt{2}}$, and $\scriptstyle{1/2\sqrt{2}}$
The seven pieces form a rectangle of length 1 by 5/4.
## Game modes
• Tutorial: Learn the controls and how to play.
• Silhouettes?: Standard play mode. Solve puzzles with no time restriction.
• Time Pressure: Mode for experienced players. Stages have a time limit of 3 minutes. Players must try to solve each puzzle in less than a minute.
• 7 steps: Advanced mode for expert players. The player must place each piece perfectly with no room for mistakes.
With the Bragging Rights mode, multiple Neves players can compete with others who do not have a copy of the game via DS Download Play.
## WiiWare version
Players of the WiiWare version of the game, called NEVES Plus (known as NEVES Plus: Pantheon of Tangrams in Europe), use the Wii Remote to grab and rotate puzzle pieces, with up to four players being able to solve the puzzle simultaneously or compete against each other in teams of two. The game also features new multiplayer modes and an Ancient Egypt-themed setting.[1] A sequel called NEVES Plus Returns has already been confirmed for release on WiiWare. |
# Generalisation of the Fourier Transform
If we define a Phase-shift operator as follows $$\Phi_{(a,b)}[f(x)] = e^{i b (x - a /2)} f(x-a)$$ and recall the following property of the Fourier transform $$F[e^{i a (x+ b /2)} f(x+b)] = e^{ib (\omega - a/2)}\hat{f}(\omega-a)$$ we see that these operators satisfy the relation $$\Phi_{(-b,a)} = F^{-1} \circ \Phi_{(a,b)} \circ F$$ or rather, that conjugation by the Fourier transform switches the action of phase multiplication and a shift.
It's not really a "switch" though; it's a $$\pi/2$$ rotation in the argument space $$(a,b)$$. I.e., using $$R_\theta = \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix}$$ we find $$\Phi_{R_{\pi/2}(a,b)} = F^{-1} \circ \Phi_{(a,b)} \circ F.$$ Is anything known about the operators $$T_\theta$$ which satisfy corresponding relations for different rotations $$\Phi_{R_\theta(a,b)} = T_\theta^{-1} \circ \Phi_{(a,b)} \circ T_\theta.$$ Do these have a name? Or known properties? Can $$T_\theta[f]$$ be calculated for generic functions and generic $$\theta$$? Or even for special non-trivial $$\theta$$ other than integer multiples of $$\pi/2$$?
I can see that for $$\theta=2\pi/n$$ that $$T_\theta^n = \mathrm{identity}$$ and that $$T_\theta$$ should be a linear transformation, and I am interested in what other properties they may be known to have. |
# Calculation of prime numbers making use of Parallel.ForEach
In my spare time I decided to write a program that would systematically identify prime numbers from 2 to 18,446,744,073,709,551,615. This is for fun and learning, as I know it will take too long to actually ever reach the upward value, but I'm using this to explore parallel processing. I know this is not a traditional question but I very much would like the critique of my peers. I know this can been torn apart, so please do, but if you do, do so constructively.
The program is designed to run until the user hits the esc key; at which time it will generate a file with all of the prime numbers discovered. The path to this file needs to be configured to a value for your directory structure. When I restart the program it will accept, as an argument, a primes text file, reading it in and starting from where it left off. The parallel processing portion and implementing the sieve for finding primes is what I was interested in woodshedding.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.IO;
namespace Prime
{
    public class Program
    {
        public static void Main(string[] args)
        {
            // seed the list with 2 so the odd candidates have something to divide by
            List<UInt64> primes = new List<UInt64> { 2 };
            UInt64 numberToCheck = 3;
            if (args.Count() > 0)
            {
                numberToCheck = ReadPrimesToList(args[0].ToString(), out primes) + 2;
            }
            try
            {
                bool quit = false;
                Console.WriteLine("Prime Number Search");
                while (!quit)
                {
                    if (Console.KeyAvailable)
                    {
                        // esc stops the search and dumps the primes found so far
                        if (Console.ReadKey(true).Key == ConsoleKey.Escape)
                            quit = true;
                    }
                    Console.Write("Processing: " + numberToCheck);
                    if (CheckForPrime(numberToCheck, primes))
                    {
                        primes.Add(numberToCheck);
                        Console.WriteLine(" Prime Found!");
                    }
                    else
                        Console.WriteLine(" Not Prime :(");
                    if (numberToCheck < UInt64.MaxValue)
                        numberToCheck += 2;
                    else
                        break;
                }
                Console.WriteLine("Exiting");
                WritePrimesToFile(primes);
                Console.WriteLine("< Press Any Key To Exit >");
            }
            catch
            {
                if (primes.Count > 0)
                    WritePrimesToFile(primes);
            }
        }

        private static UInt64 ReadPrimesToList(string fileName, out List<UInt64> primes)
        {
            primes = new List<UInt64>();
            FileInfo file = new FileInfo(fileName);
            String lineIn = String.Empty;
            using (StreamReader reader = file.OpenText())
            {
                while ((lineIn = reader.ReadLine()) != null)
                {
                    String[] numberStrings = lineIn.Split(new char[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
                    foreach (String numberString in numberStrings)
                    {
                        primes.Add(UInt64.Parse(numberString));
                    }
                }
            }
            return primes[primes.Count() - 1];
        }
        private static void WritePrimesToFile(List<UInt64> primes)
        {
            String dateAndTime = DateTime.Now.ToString("yyyyMMddhhmm");
            String fileName = String.Format(@"<substitute your path here>\primes [{0}].txt", dateAndTime);
            FileInfo file = new FileInfo(fileName);
            using (StreamWriter writer = file.CreateText())
            {
                int maxLength = primes[primes.Count - 1].ToString().Length;
                String line = String.Empty;
                const int maxColumn = 16;
                int column = 0;
                foreach (UInt64 number in primes)
                {
                    string numberString = number.ToString();
                    int numberLength = numberString.Length;
                    line += numberString.PadLeft(maxLength, ' ') + ((column < (maxColumn - 1)) ? " " : String.Empty);
                    column++;
                    if (column == maxColumn)
                    {
                        writer.WriteLine(line);
                        line = string.Empty;
                        column = 0;
                    }
                }
                if (line.Length > 0)
                    writer.WriteLine(line);
                writer.Flush();
                writer.Close();
            }
        }
        private static bool CheckForPrime(UInt64 numberToCheck, List<UInt64> primes)
        {
            if ((numberToCheck % 2) == 0)
                return false;
            UInt64 halfway = (UInt64)(Math.Ceiling((float)numberToCheck / 2F));
            bool isprime = false;
            UInt64 factor = 0;
            Parallel.ForEach<UInt64>(primes, (prime, loopState) =>
            {
                if (prime > halfway)
                {
                    isprime = true;
                    loopState.Stop();
                }
                if ((numberToCheck % prime) == 0)
                {
                    factor = prime;
                    isprime = false;
                    loopState.Stop();
                }
            });
            return (isprime && factor == 0);
        }
    }
}
-
## migrated from stackoverflow.com Feb 17 '11 at 21:06
This question came from our site for professional and enthusiast programmers.
Answers so far failed to inform you that this code is completely wrong:
if (prime > halfway)
{
    isprime = true;
    loopState.Stop();
}
You can't do that on Parallel.ForEach - there's no guarantee as to the order of execution. That's the whole point of the parallel loop!
if (prime > halfway)
{
    loopState.Break(); // join all 'previous' jobs
    return; // terminate *this* job, not caring about others
}
Another serious flaw is the condition itself - that fails as soon as checking 3. Should be:
if (prime >= halfway)
Another important thing is that for this specific task, plain good ol' sequential for(...) is very likely to outperform the parallel version.
My results for 1234567890, tested on Mono: (stupid me)
My results for 2^31 - 1, tested on Mono:
parallel: 5.3873 [ms], returned True
sequential: 1.0157 [ms], returned True
2147483647 : True
EDIT: I see some reports suggesting the Parallel.ForEach implementation on Mono is exceptionally slow - would be nice to have some alternative results from win dudes.
I hit the memory limits running from 2 to 1339484197
Let's make a fun math game:
1. The number of primes less than N is approximated by N / ln(N).
2. Our maximum list capacity is 2GB (32/64 bit alike). If you run on 32 bit, your whole process available memory is 2GB altogether. So you cannot even have that.
3. We're using long, so each prime occupies 8 bytes.
4. And to add more evil, we didn't allocate list size in advance, so we're operating on doubling mode. That means every time we have 2^N elements, we're allocating space for 2*2^N more, thus momentarily using 3x the space needed for the actual list. So what happened here?
5. We were at N = 1339484197, so the list had ~ N/ln(N) elements => ~ 64M primes
6. Each prime takes 8 bytes, so we're eating ~500MB of memory.
7. Now we add on more item and need to double, so we have to allocate 1GB more. That's 1.5GB altogether. Too much.
8. Now to the good news: We can get x3 primes more just by passing a MAX_SIZE to the List c'tor. Indeed, we can get x6, since one side implication of the above math game is that we can safely use UInt32 (see the sketch below).
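A minimal sketch of that last point (the capacity figure is only a ballpark guess, a bit above N / ln(N), not a measured value):

// Reserve capacity up front and store 32-bit values: 1,339,484,197 fits in a UInt32,
// so ~64M primes cost roughly half the memory and the list never has to double.
const int expectedPrimeCount = 70000000; // rough guess, slightly above N / ln(N)
List<UInt32> primes = new List<UInt32>(expectedPrimeCount);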
-
You can use AsOrdered() though. – codesparkle Nov 13 '12 at 22:58
Console.Write is horribly slow. I mean it's not that bad, but it's worse than you might think.
Try something like:
if((numberToCheck + 1) % 1000 == 0)
Console.Write("Processing: " + numberToCheck);
I've had many cases where updating the console less often resulted in a massive speed boost. A good rule is not to update the console more than a few times per second.
-
Actually using the console too much can result in (afaik uncatchable) exceptions, especially when used in the main application thread. – Fge Mar 2 '11 at 13:33
@Fge If that happens to you raise a bug. It should not be an assumption that you have to try to avoid using the console to avoid exceptions. – Paul Nov 14 '12 at 0:03
maybe also interesting here: stackoverflow.com/questions/21947452/… – Vogel612 May 14 '14 at 8:55
For checking the primality of several numbers, you should use the Sieve of Eratosthenes. It is two simple loops you can parallelize, and the time complexity is just O(n log(n) log log(n)).
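For illustration, here is a minimal sequential sketch of the sieve (not the poster's code; the inner marking loop is the part you would try to parallelize):

static List<int> SievePrimes(int limit)
{
    bool[] composite = new bool[limit + 1];
    List<int> primes = new List<int>();
    for (int p = 2; p <= limit; p++)
    {
        if (composite[p])
            continue;                      // p was already marked by a smaller prime
        primes.Add(p);
        for (long m = (long)p * p; m <= limit; m += p)
            composite[(int)m] = true;      // mark every multiple of p as composite
    }
    return primes;
}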
Also, as Hannesh said, writing to the console is incredibly slow, so you should probably avoid writing "Processing some number" and just write the last number processed at the end.
Console functions also create a bottleneck, especially if you check them after every number; I would check numbers in batches of 1000 or 10000 before asking if the user is pressing a key.
-
Here is a super cute realization which I've seen in a book. If you wish you could optimize it.
IEnumerable<int> numbers = Enumerable.Range(3, 100000);
var parallelQuery =
from n in numbers.AsParallel()
where Enumerable.Range(2, (int)Math.Sqrt(n)).All(i => n % i > 0)
select n;
int[] primes = parallelQuery.ToArray();
-
Hmmmm... To speed it up, I'd look into alternate Prime Number tests, like Miller Rabin Test or AKS Test
Here is a sample of code for the Miller Rabin algorithm written in C#. Maybe it can be parallelized and work faster than the method you currently have?
-
Well I hit a ceiling last night, the combined memory usage of primes from 2 to 1339484197 pushed the runtime to throw a System.OutOfMemoryException. So I'll have to write the primes out to a file on the fly, perhaps in batches, and only keep enough primes around to test up to the SqRt; or look into implementing one of the alternate tests you suggest. – clichekiller Feb 18 '11 at 21:12
You could consider tweaking the prime test to bail out as soon as you reach a prime that is >= sqrt(numberToTest). The proof (very loosely) is that a composite number can always be written as the product of two integer factors, each > 1. Of the two factors, one must be necessarily <= the other. Thus you can stop testing when you reach the worst-case upper bound which is the square root of the test number. Other more efficient means exist for testing primes, but this tweak requires remarkably little code.
Here's a summary of my tweaks:
• Removed "halfway" and "factor" variables & their use. They may have had value for debugging purposes, but they seem to detract from the overall goal of the method.
• Defaulted isprime to true - if we make it all the way through the candidate factors without finding one that is divisible, then isprime = true is the correct value to return.
• Avoided computing the square root of numberToCheck by changing the condition to first square the candidate factor and compare the result with numberToCheck in the LINQ Where call. This could eat up time when testing much larger numbers if the repeated multiplications outweigh the cost of computing an integer square root.
• The manner I chose to filter the primes using the LINQ Where could have a negative performance impact based on how the TPL partitions the work. I recall reading a post from the TPL team that they partition the work differently for indexable IEnumerables, and I wanted to provide the link here, but I am having trouble locating it. In short, this might offset the performance improvement bailing out earlier might provide.
Regardless, here are the above tweaks (untested):
private static bool CheckForPrime(UInt64 numberToCheck, List<UInt64> primes)
{
    if ((numberToCheck % 2) == 0)
        return false;
    bool isprime = true;
    Parallel.ForEach(primes.Where(prime => prime*prime < numberToCheck),
        (prime, loopState) =>
        {
            if ((numberToCheck % prime) == 0)
            {
                isprime = false;
                loopState.Stop();
            }
        });
    return isprime;
}
-
You can replace Where with TakeWhile since the primes are ordered. – CodesInChaos Nov 13 '12 at 22:10
@CodesInChaos - `TakeWhile` is an excellent improvement. +1 – devgeezer Nov 14 '12 at 14:51 |
# [SOLVED]Coordinate transformation derivatives
#### topsquark
##### Well-known member
MHB Math Helper
I've had to hit my books to help someone else. Ugh.
Say we have the coordinate transformation $$\bf{x}' = \bf{x} + \epsilon \bf{q}$$, where $$\epsilon$$ is constant. (And small if you like.) Then obviously
$$d \bf{x}' = d \bf{x} + \epsilon d \bf{q}$$.
How do we find $$\frac{d}{d \bf{x}'}$$?
I'm missing something simple here, I'm sure of it.
-Dan
#### Fantini
##### "Read Euler, read Euler." - Laplace
MHB Math Helper
Perhaps I'm getting lost in notation, topsquark. I'm not sure I understand your question.
Let us assume that $\mathbf{x}', \mathbf{x}$ and $\mathbf{q}$ are in $\mathbb{R}^3$. Therefore we have that $d \mathbf{x}' = \text{rot } \mathbf{x}' = \nabla \times \mathbf{x}'$.
What are you trying to compute?
#### topsquark
##### Well-known member
MHB Math Helper
Perhaps I'm getting lost in notation, topsquark. I'm not sure I understand your question.
Let us assume that $\mathbf{x}', \mathbf{x}$ and $\mathbf{q}$ are in $\mathbb{R}^3$. Therefore we have that $d \mathbf{x}' = \text{rot } \mathbf{x}' = \nabla \times \mathbf{x}'$.
What are you trying to compute?
Simple example then. Say we have
$$x = e^t$$
Then
$$dx = e^t dt$$
Formally we have
$$\frac{d}{dx} = e^{-t} \frac{d}{dt}$$
I'm looking for something along those lines. Basically I am trying to simplify the operator
$$\frac{\partial }{ \partial ( \bf{x} + \epsilon \bf{q} )}$$
-Dan
#### Fantini
##### "Read Euler, read Euler." - Laplace
MHB Math Helper
Say we have
$$x = e^t$$
Then
$$dx = e^t dt$$
Formally we have
$$\frac{d}{dx} = e^{-t} \frac{d}{dt}$$
This is something I don't agree with. You seem to be saying that $dx = e^t dt$ implies $\frac{1}{dx} = e^{-t} \frac{1}{dt}$.
#### ZaidAlyafey
##### Well-known member
MHB Math Helper
$$dx = e^t dt$$
Formally we have
$$\frac{d}{dx} = e^{-t} \frac{d}{dt}$$
$$\displaystyle \frac{d}{dx}$$ what do you mean by that ?
we know that $$\displaystyle \frac{d}{dx} ( f(x) ) = f'(x)$$
#### Jester
##### Well-known member
MHB Math Helper
Are you looking for how derivatives transform under infinitesimal transformations?
#### topsquark
##### Well-known member
MHB Math Helper
This is something I don't agree with. You seem to be saying that $dx = e^t dt$ implies $\frac{1}{dx} = e^{-t} \frac{1}{dt}$.
Not quite.
$$dx = e^t dt \implies \frac{d}{dx} = \frac{d}{e^t dt} = e^{-t} \frac{d}{dt}$$
And I did say "formally." I understand there are technical issues with this "derivation." It's a substitution technique I was taught when solving Euler differential equations.
-Dan
- - - Updated - - -
Are you looking for how derivatives transform under infinitesimal transformations?
Originally yes. But certainly there is an approach to simplify the operator in the "macroscopic" case?
-Dan
#### Jester
##### Well-known member
MHB Math Helper
So if you're transforming from $(t,x,u) \rightarrow (t',x',u')$ where $t$ and $x$ are the independent variables and $u$ the dependent variable, you could use Jacobians.
#### Klaas van Aarsen
##### MHB Seeker
Staff member
Sounds indeed as if you refer to the Jacobian matrix, which in this case is the identity matrix.
Assuming you mean that $$\displaystyle \frac d {d\mathbf x'} = \left(\frac \partial{\partial x_1'}, ..., \frac \partial{\partial x_n'}\right)$$, that would be the same as $$\displaystyle \frac d {d\mathbf x}$$
#### Jester
##### Well-known member
MHB Math Helper
Not what I meant. Suppose that
$\bar{t}=t+T(t,x,u)\varepsilon + O(\varepsilon^{2}),$
$\bar{x}=x+X(t,x,u)\varepsilon + O(\varepsilon^{2}),\;\;\;(1)$
$\bar{u}=u+U(t,x,u)\varepsilon + O(\varepsilon^{2}),$
and I wish to calculate $\displaystyle \dfrac{\partial \bar{u}}{\partial \bar{t}}$ then an easy way is to use Jacobians. I.e.
$\displaystyle \dfrac{\partial \bar{u}}{\partial \bar{t}} = \dfrac{\partial(\bar{u},\bar{x})}{\partial(\bar{t},\bar{x})} = \dfrac{\partial(\bar{u},\bar{x})}{\partial(t,x)} / \dfrac{\partial(\bar{t},\bar{x})}{\partial(t,x)}\;\;\;(2)$.
Now insert the transformations (1) into (2) and expand. The nice thing about (2) is that it is easy to calculate. Furthermore, a Taylor expansion for small $\varepsilon$ is also fairly straight forward (if this is what topsquark was thinking)
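For reference, carrying (2) out to first order in $\varepsilon$ (here $D_t = \partial_t + u_t\partial_u$ and $D_x = \partial_x + u_x\partial_u$ denote total derivatives) gives
$\dfrac{\partial \bar{u}}{\partial \bar{t}} = u_t + \left(D_t U - u_t D_t T - u_x D_t X\right)\varepsilon + O(\varepsilon^{2}),$
which is the usual first-order (prolongation) formula for how the derivative transforms.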
#### Klaas van Aarsen
##### MHB Seeker
Staff member
Ah. I sort of assumed $\mathbf q$ was a constant.
I guess I shouldn't have, since you did refer to $d\mathbf q$, suggesting $\mathbf q$ depends on $\mathbf x$.
#### topsquark
##### Well-known member
MHB Math Helper
To all that mentioned Jacobians: (sighs) Yup. That's what I was looking for. I thought it would be something simple!
Thanks to all!
-Dan |
If you've played around with the Lightning Network, you've most definitely come across a BOLT 11 invoice. It's usually encoded as a QR code for conveniently scanning with your phone's camera. It includes things like the receiving node's public key, an amount with a unit suffix, a payment hash, a note, an expiry, maybe some routing hints, etc. These invoices are generated on-the-fly by the receiving node that generates a payment preimage which it keeps secret until the HTLCs have made their way across the route.
If we wanted to enable Lightning payments on a vending machine, for example, you might think that you'd need to integrate an always-online node within the vending machine, or have it connected to some remote node to generate the invoices, or even duplicate the private key of the receiving node. However, with the help of ECDH, described in a message in the Lightning dev mailing list, we don't need the vending machine to be online at all!
There are a few key elements to making this work.
Elliptic-curve Diffie-Hellman (ECDH)
ECDH allows us to use elliptic-curve public–private key pairs to establish a shared secret.
Suppose Alice has an ECDSA secp256k1 public key A and private key a, and Bob has public key B and private key b. Alice can compute the point aB, and Bob can compute the point bA.
Note that if G is the generator point for the elliptic curve, then:
aB = a(bG) = (ab)G = (ba)G = b(aG) = bA
(By the corresponding associativity and commutativity rules of the associated groups)
So aB (or equivalently, bA) is the shared secret that could be used for symmetric encryption if desired (taking the x-coordinate for example).
This provides an opportunity for us to do something interesting.
Let's say that the vending machine is an "offline" node that generates an ephemeral private key k and corresponding public key K (node ID). Suppose there is also an "online" node with public key N that is owned by the vending machine company. The vending machine then generates an invoice with some routing hints saying that the payment should go through node N.
So we declare N as an intermediary routing node for K, which is actually impossible to route to since it is offline / disconnected from the Lightning Network.
This is where the trick comes in. The preimage is specifically constructed as hmac-sha256(x, amount), where x is the shared secret between N and K (the x-coordinate of kN). The amount is stored in the 8-byte short channel ID between N and K, a channel that does not really exist.
Routing to a non-existent node
Now a customer walks up to the vending machine, and being a forward-thinking #LaserRaysUntil100k human, decides to pay with their non-custodial mobile Lightning wallet. They scan the BOLT 11 invoice generated by the vending machine and their wallet (node) picks up the routing hints. It then attempts to route the payment to K along some route via N.
Now when N is asked to route a payment to an unknown node, it calculates the ECDH shared secret x. It then uses this, along with the amount encoded in the short channel ID to reconstruct a preimage with hmac-sha256(x, amount). The payment hash is derived from that and if it matches the payment hash in the accepted HTLC, then it can claim the amount.
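To make the preimage reconstruction concrete, here is a rough sketch in C# (the ECDH step itself is omitted; sharedSecretX is assumed to be the 32-byte x-coordinate of kN, and the big-endian encoding of the amount is my assumption for illustration, not something specified above):

using System;
using System.Security.Cryptography;

static class OfflinePreimage
{
    // Both sides can run this: the vending machine knows (k, N), the online node knows (n, K),
    // and ECDH gives them the same shared secret x without ever talking to each other.
    static byte[] Preimage(byte[] sharedSecretX, ulong amountFromShortChannelId)
    {
        byte[] amount = BitConverter.GetBytes(amountFromShortChannelId);
        if (BitConverter.IsLittleEndian)
            Array.Reverse(amount);                   // big-endian encoding (assumption)
        using (var hmac = new HMACSHA256(sharedSecretX))
            return hmac.ComputeHash(amount);         // preimage = hmac-sha256(x, amount)
    }

    static byte[] PaymentHash(byte[] preimage)
    {
        using (var sha256 = SHA256.Create())
            return sha256.ComputeHash(preimage);     // this hash goes into the BOLT 11 invoice
    }
}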
I plan to make a little Web simulation that demonstrates this a bit better at some stage! Until then, ride the Lightning! |
# Sediment
Sediment is any particulate matter that can be transported by fluid flow and which eventually is deposited as a layer of solid particles on the bed or bottom of a body of water or other liquid. Sedimentation is the deposition by settling of a suspended material.
Sediments are also transported by wind (eolian) and glaciers. Desert sand dunes and loess are examples of aeolian transport and deposition. Glacial moraine deposits and till are ice transported sediments. Simple gravitational collapse also creates sediments such as talus and mountainslide deposits as well as karst collapse features.
Seas, oceans, and lakes accumulate sediment over time. The material can be terrigenous (originating on the land) or marine (originating in the ocean). Deposited sediments are the source of sedimentary rocks, which can contain fossils of the inhabitants of the body of water that were, upon death, covered by accumulating sediment. Lake bed sediments that have not solidified into rock can be used to determine past climatic conditions.
## Sediment transport
### Rivers and streams
If a fluid, such as water, is flowing, it can carry suspended particles. The settling velocity is the minimum flow velocity required to transport, rather than deposit, sediment, and is given by Stokes' law:
$w=\frac{(\rho_p-\rho_f)gr^2}{18\mu}$
where w is the settling velocity, ρ is density (the subscripts p and f indicate particle and fluid respectively), g is the acceleration due to gravity, r is the radius of the particle and μ is the dynamic viscosity of the fluid.
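As a rough worked example (illustrative values, not from the source): a quartz grain of radius 0.05 mm and density 2650 kg/m³ settling in water (density 1000 kg/m³, dynamic viscosity 0.001 Pa·s) gives w ≈ (1650 × 9.81 × 2.5×10⁻⁹) / (18 × 0.001) ≈ 2.2×10⁻³ m/s, i.e. the grain falls a couple of millimetres per second in still water.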
If the flow velocity is greater than the settling velocity, sediment will be transported downstream as suspended load. As there will always be a range of different particle sizes in the flow, some will have sufficiently large diameters that they settle on the river or stream bed, but still move downstream. This is known as bed load and the particles are transported via such mechanisms as saltation (jumping up into the flow, being transported a short distance then settling again), rolling and sliding. Saltation marks are often preserved in solid rocks and can be used to estimate the flow rate of the rivers that originally deposited the sediments.
#### Fluvial Bedforms
Any particle that is larger in diameter than approximately 0.7 mm will form visible topographic features on the river or stream bed. These are known as bedforms and include ripples, dunes, plane beds and antidunes. See bedforms for more detail. Again, bedforms are often preserved in sedimentary rocks and can be used to estimate the direction and magnitude of the depositing flow.
#### Key depositional environments
The major fluvial (river and stream) environments for deposition of sediments include:
1. Deltas (arguably an intermediate environment between fluvial and marine)
2. Point-bars
3. Alluvial fans
4. Braided streams
5. Oxbow lakes
6. Levees
### Shores and shallow seas
The second major environment where sediment may be suspended in a fluid is in seas and oceans. The sediment could consist of terrigenous material supplied by nearby rivers and streams or reworked marine sediment (e.g. sand). In the mid-ocean, living organisms are responsible for the sediment accumulation, their shells sinking to the ocean floor upon death.
#### Marine Bedforms
Marine environments also see the formation of bedforms, whose characteristics are influenced by the tide.
#### Key depositional environments
The major areas for deposition of sediments in the marine environment include:
1. Littoral sands (e.g. beach sands, coastal bars and spits, largely clastic with little faunal content)
2. The continental shelf (silty clays, increasing marine faunal content).
3. The shelf margin (low terrigenous supply, mostly calcareous faunal skeletons)
4. The shelf slope (much more fine-grained silts and clays)
One other depositional environment which is a mixture of fluvial and marine is the turbidite system, which is a major source of sediment to the ocean shelf and basins. |
Preprint Open Access
# Analysis and interpretation of the first CROCUS reactor neutron noise experiments using an improved point-kinetics model
A. Brighenti; S. Santandrea; I. Zmijarevic
### Citation Style Language JSON Export
{
"publisher": "Zenodo",
"DOI": "10.5281/zenodo.5817584",
"language": "eng",
"title": "Analysis and interpretation of the first CROCUS reactor neutron noise experiments using an improved point-kinetics model",
"issued": {
"date-parts": [
[
2022,
1,
4
]
]
},
"abstract": "<p>In the framework of the European project CORTEX, included in the H2020 program, a new Improved Point-Kinetics (IPK) model has been developed and validated on the neutron noise measurements recorded during the experimental campaigns carried out with the CROCUS reactor, at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland. In the first part of this paper, the methodology for the experimental data analysis developed by CEA is presented and its outcomes are compared to those obtained by the EPFL team. In the second part, taking as reference the first CROCUS experimental campaign, the present work presents a series of interpretive exercises performed with the IPK noise model aiming at showing its simulation capabilities and at trying to address some of the discrepancies observed during the validation exercise. With a deeper understanding of the phenomena inside CROCUS, the following step foresees the application of the code to full reactor studies.</p>",
"author": [
{
"family": "A. Brighenti"
},
{
"family": "S. Santandrea"
},
{
"family": "I. Zmijarevic"
}
],
"type": "article",
"id": "5817584"
}
|
# Mechanical power transmission
1. Nov 24, 2009
### CHICAGO
Hi all
Searching along this PF I found this formula to solve the power transmission between piston and crankshaft. I have then drawn this to show it.
According to this formula we have a maximum (100%) power transmission when angle theta is 90º. I think we should have some power loss even at 90º. Not because of friction or something like that, but because of simple geometry.
Am I wrong? or is there another formula around?
Thanks a lot in advance.
http://img195.imageshack.us/img195/3300/pistontocrankshaft.jpg [Broken]
Last edited by a moderator: May 4, 2017
2. Nov 25, 2009
### xxChrisxx
What do you mean by power transmission?
Also where did you get that equation, because it looks... wrong somehow. I'd like to see how they derived it and that its actually showing.
Last edited: Nov 25, 2009
3. Nov 25, 2009
### CHICAGO
Hi xxChrisxx
Thank you for your prompt response.
I probably did not use the correct expression. I was simply referring to the way of solving the crankshaft torque vs. the force on the piston.
I got this formula from this thread:
https://www.physicsforums.com/showthread.php?t=266738
Which indirectly led me to these two other links:
http://web.mit.edu/~j_martin/www/pistonphysics.bmp
https://www.physicsforums.com/attachment.php?attachmentid=16099&d=1225070096
I think that only in the case of a rod extremely larger than the crank radius we will not have significant power loss when Theta is 90º.
Is it?
Thank you in advance.
Last edited by a moderator: Apr 24, 2017
4. Nov 25, 2009
### xxChrisxx
Ahh, I'm tired and I can't get my derivation to work right. I remember lumping my constants differently, so my eq looks slightly different. I've got a bad feeling I've done it right now, but screwed up on the actual piece of work (which is why it looks 'wrong' or different to me).
Someone who is actually good at maths and can derive the eq will help you out.
5. Nov 25, 2009
### CHICAGO
No problem, xxChrisxx
I will try to derive it by myself this evening and I will let you know what I get.
Thanks anyway.
.
6. Nov 25, 2009
### Ranger Mike
math meets real world..again...you are theoretically correct with your assumption that "According to this formula we have a maximum (100%) power transmission when angle theta is 90º."
if..if...the force acting on the piston was constant from TDC to 90 degrees of crankshaft rotation..but it is not..at 90 degrees, the piston is one half of the entire stroke..the maximum force applied to the top of the piston occurs at TDC and over a few degrees crank rotation past TDC and diminishes from this max point as various pressure graphs show in another forum..look it up I think it was cylinder pressure of IC engine earlier this week??
one more huge item that was dismissed in your assumption..you got a whole bunch of parasitic drag caused by piston to skirt clearance, piston drag, piston ring drag , bearing drag and this is huge. Also the rotational and reciprocating weight of the entire power train will impact on total horsepower..
Last edited: Nov 25, 2009
7. Nov 25, 2009
### CHICAGO
.
Ranger Mike, you are absolutely right, but my question is only focused on the mechanical transmission losses between piston force F and crankshaft torque T due only to the geometry of the system.
Finally I have derived this.
It seems the max efficiency is when theta is arctan(l/r). That would be the crank position in the second image.
I had no time to derive a straight single formula, so we first get ß and in a second step solve T
http://img4.imageshack.us/img4/240/pistonandrod20.jpg [Broken]
Thanks anyway. If you observe something wrong in this formulae, please let me know.
.
Last edited by a moderator: May 4, 2017
8. Nov 25, 2009
### Bob S
If less than 100% of the power goes into turning the crank, it has to go someplace else. Where does it go? heat loss (exhaust gases, friction)? If there is no other place, then 100% does go into turning the crank.
Bob S
9. Nov 25, 2009
### KLoux
I think it comes down to how you're defining efficiency - if you consider efficiency to be power out / power in (this is usually what is meant by efficiency), then Bob is right and efficiency is not a function of position.
If you are looking at it from some other point of view (torque, velocity, etc.), then things change, for example, the applied torque results in the maximum force at the piston (along the axis) when theta = 90 (this is also when the piston is at its maximum velocity, if the crank's angular velocity is constant).
-Kerry
10. Nov 25, 2009
### Mech_Engineer
The equations weren't looking right to me, and it turns out you misapplied the law of sines to find beta. The correct formula should be:
$$\beta=asin\left(\frac{r*sin(\theta)}{l}\right)$$
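With that β, the usual single-formula version of the crank-effort relation (assuming theta is measured from TDC and F is the axial force on the piston) is:
$$T=F\,r\,\frac{\sin(\theta+\beta)}{\cos\beta}$$
This reduces to T = F·r at theta = 90°, and only approaches the simple F·r·sin(theta) form when the rod is much longer than the crank (β → 0).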
11. Nov 25, 2009
### CHICAGO
Hi, Mech
By sin-1 I mean asin
I used r=1 and I used l taking r as the unit in my equation. I will correct it for more clarification.
Thank you.
.
12. Nov 25, 2009
### Mech_Engineer
Yes, asin and sin-1 are equivalent.
You never said r=1 anywhere, so you need to leave it in the equation.
Last edited: Nov 25, 2009
13. Nov 25, 2009
### CHICAGO
Thank you, Mech_Engineer, I have already included r in the equation above.
.
|
# My All 2020 Mathematics A to Z: Wronskian
Today’s is another topic suggested by Mr Wu, author of the Singapore Maths Tuition blog. The Wronskian is named for Józef Maria Hoëne-Wroński, a Polish mathematician, born in 1778. He served in General Tadeusz Kosciuszko’s army in the 1794 Kosciuszko Uprising. After being captured and forced to serve in the Russian army, he moved to France. He kicked around Western Europe and its mathematical and scientific circles. I’d like to say this was all creative and insightful, but, well. Wikipedia describes him trying to build a perpetual motion machine. Trying to square the circle (also impossible). Building a machine to predict the future. The St Andrews mathematical biography notes his writing a summary of “the general solution of the fifth degree [polynomial] equation”. This doesn’t exist.
Both sources, though, admit that for all that he got wrong, there were flashes of insight and brilliance in his work. The St Andrews biography particularly notes that Wronski’s tables of logarithms were well-designed. This is a hard thing to feel impressed by. But it’s hard to balance information so that it’s compact yet useful. He wrote about the Wronskian in 1812; it wouldn’t be named for him until 1882. This was 29 years after his death, but it does seem likely he’d have enjoyed having a familiar thing named for him. I suspect he wouldn’t enjoy my next paragraph, but would enjoy the fight with me about it.
# Wronskian.
The Wronskian is a thing put into Introduction to Ordinary Differential Equations courses because students must suffer in atonement for their sins. Those who fail to reform enough must go on to the Hessian, in Partial Differential Equations.
To be more precise, the Wronskian is the determinant of a matrix. The determinant you find by adding and subtracting products of the elements in a matrix together. It’s not hard, but it is tedious, and gets more tedious pretty fast as the matrix gets bigger. (In Big-O notation, it’s the order of the cube of the matrix size. This is rough, for things humans do, although not bad as algorithms go.) The matrix here is made up of a bunch of functions and their derivatives. The functions need to be ones of a single variable. The derivatives, you need first, second, third, and so on, up to one less than the number of functions you have.
If you have two functions, $f$ and $g$, you need their first derivatives, $f'$ and $g'$. If you have three functions, $f$, $g$, and $h$, you need first derivatives, $f'$, $g'$, and $h'$, as well as second derivatives, $f''$, $g''$, and $h''$. If you have $N$ functions and here I’ll call them $f_1, f_2, f_3, \cdots f_N$, you need $N-1$ derivatives, $f'_1, f''_1, f'''_1, \cdots f^{(N-1)}_1$ and so on through $f^{(N-1)}_N$. You see right away this is a fun and exciting thing to calculate. Also why in intro to differential equations you only work this out with two or three functions. Maybe four functions if the class has been really naughty.
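To make the smallest case concrete: with just the two functions $f$ and $g$, the Wronskian works out to $W(f, g) = f g' - f' g$, the determinant of the 2-by-2 matrix with $f, g$ across the top row and $f', g'$ across the bottom.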
Go through your $N$ functions and your $N-1$ derivatives and make a big square matrix. And then you go through calculating the determinant. This involves a lot of multiplying strings of these derivatives together. It's a lot of work. But at least doing all this work gets you older.
So one will ask why do all this? Why fit it into every Intro to Ordinary Differential Equations textbook and why slip it in to classes that have enough stuff going on?
One answer is that if the Wronskian is not zero for some values of the independent variable, then the functions that went into it are linearly independent. Mathematicians learn to like sets of linearly independent functions. We can treat functions like directions in space. Linear independence assures us none of these functions are redundant, pointing a way we already can describe. (Real people see nothing wrong in having north, east, and northeast as directions. But mathematicians would like as few directions in our set as possible.) The Wronskian being zero for every value of the independent variable seems like it should tell us the functions are linearly dependent. It doesn’t, not without some more constraints on the functions.
This is fine, but who cares? And, unfortunately, in Intro it’s hard to reach a strong reason to care. To this major, the emphasis on linearly independent functions felt misplaced. It’s the sort of thing we care about in linear algebra. Or some course where we talk about vector spaces. Differential equations do lead us into vector spaces. It’s hard to find a corner of analysis that doesn’t.
Every ordinary differential equation has a secret picture. This is a vector field. One axis in the field is the independent variable of the function. The other axes are the value of the function. And maybe its derivatives, depending on how many derivatives are used in the ordinary differential equation. To solve one particular differential equation is to find one path in this field. People who just use differential equations will want to find one path.
Mathematicians tend to be fine with finding one path. But they want to find what kinds of paths there can be. Are there paths which the differential equation picks out, by making paths near it stay near? Or by making paths that run away from it? And here is the value of the Wronskian. The Wronskian tells us about the divergence of this vector field. This gives us insight to how these paths behave. It’s in the same way that knowing where high- and low-pressure systems are describes how the weather will change. The Wronskian, by way of a thing called Liouville’s Theorem that I haven’t the strength to describe today, ties in to the Hamiltonian. And the Hamiltonian we see in almost every mechanics problem of note.
You can see where the mathematics PhD, or the physicist, would find this interesting. But what about the student, who would look at the symbols evoked by those paragraphs above with reasonable horror?
And here’s the second answer for what the Wronskian is good for. It helps us solve ordinary differential equations. Like, particular ones. An ordinary differential equation will (normally) have several linearly independent solutions. If you know all but one of those solutions, it’s possible to calculate the Wronskian and, from that, the last of the independent solutions. Since a big chunk of mathematics — particularly for science or engineering — is solving differential equations you see why this is something valuable. Allow that it’s tedious. Tedious work we can automate, or give to research assistant to do.
One then asks what kind of differential equation would have all-but-one answer findable, and yield that last one only by long efforts of hard work. So let me show you an example ordinary differential equation:
$y'' + a(x) y' + b(x) y = g(x)$
Here $a(x)$, $b(x)$, and $g(x)$ are some functions that depend only on the independent variable, $x$. Don't know what they are; don't care. The differential equation is a lot easier if $a(x)$ and $b(x)$ are constants, but we don't insist on that.
This equation has a close cousin, and one that's easier to solve than the original. Its cousin is called a homogeneous equation:
$y'' + a(x) y' + b(x) y = 0$
The left-hand-side, the parts with the function $y$ that we want to find, is the same. It’s the right-hand-side that’s different, that’s a constant zero. This is what makes the new equation homogenous. This homogenous equation is easier and we can expect to find two functions, $y_1$ and $y_2$, that solve it. If $a(x)$ and $b(x)$ are constant this is even easy. Even if they’re not, if you can find one solution, the Wronskian lets you generate the second.
That’s nice for the homogenous equation. But if we care about the original, inhomogenous one? The Wronskian serves us there too. Imagine that the inhomogenous solution has any solution, which we’ll call $y_p$. (The ‘p’ stands for ‘particular’, as in “the solution for this particular $g(x)$”.) But $y_p + y_1$ also has to solve that inhomogenous differential equation. It seems startling but if you work it out, it’s so. (The key is the derivative of the sum of functions is the same as the sum of the derivative of functions.) $y_p + y_2$ also has to solve that inhomogenous differential equation. In fact, for any constants $C_1$ and $C_2$, it has to be that $y_p + C_1 y_1 + C_2 y_2$ is a solution.
I’ll skip the derivation; you have Wikipedia for that. The key is that knowing these homogenous solutions, and the Wronskian, and the original $g(x)$, will let you find the $y_p$ that you really want.
My reading is that this is more useful in proving things true about differential equations, rather than particularly solving them. It takes a lot of paper and I don’t blame anyone not wanting to do it. But it’s a wonder that it works, and so well.
Don’t make your instructor so mad you have to do the Wronskian for four functions.
This and all the others in My 2020 A-to-Z essays should be at this link. All the essays from every A-to-Z series should be at this link. Thank you for reading.
## Author: Joseph Nebus
I was born 198 years to the day after Johnny Appleseed. The differences between us do not end there. He/him.
## 4 thoughts on “My All 2020 Mathematics A to Z: Wronskian”
1. Reblogged this on Singapore Maths Tuition and commented:
Great interesting post on Wronskian, which is widely used in the study of differential equations. Wronski is an eccentric genius, who was once forced to leave his post at the Marseille Observatory after his theories were dismissed as “grandiose rubbish”, according to Wikipedia.
|
# What just happened to brilliant?
Hello folks. I just noticed that Brilliant has changed a bit. I am more of a math guy and not much into physics (even though I'm good at it), so I do not solve any physics problems. I took the assessment in both Mechanics and Electricity & Magnetism and did not solve any problems later, and now my level 5 rating is gone :(... Now if I am solving the problems there is no change in the rating and it is just returning with a "You solved it!"... What do I do..? :'(
Note by Muzaffar Ahmed
5 years, 6 months ago
We have updated ratings and levels so that they only display for those who have done several problems in the topic.
For those (like you) who have only gone through the diagnostic, you will need to do several more problems to confirm your level / rating. For example, you've started doing some problems in Computer Science. If you do 2-3 more problems, your rating and level should be displayed.
Staff - 5 years, 6 months ago
I never got to take an assessment... How do I do it?
- 5 years, 2 months ago
Ohh thank you.. But I guess there aren't many problems in level 5 for computers... So will I be stuck on level 4..?
- 5 years, 6 months ago
Hi Muzaffar. Which class are you in? Your age is shown as 14 and you are level 5 in all topics. How do you do it? What are the extra resources you use?
- 5 years, 4 months ago
But you are quite good too.. Good levels.. And yeah, I am in 10th grade
- 5 years, 4 months ago
Uhmm.. Well.. My father is a math teacher and he taught me math of higher classes very early since I was 5 or 6 and so I had gained interest in learning and I just read any mathematics book I come across.. Reading helps a lot..
- 5 years, 4 months ago |
# How can I prove that the Jacobian determinant of the inverse function is nonzero?
Let $f:\mathbb{R}^{r+1}\to\mathbb{R}^{r+1}$ be a real analytic function. Assume that the determinant of its Jacobian matrix is nonzero for all $x\in\mathbb{R}^{r+1}$; i.e., for every point $x\in\mathbb{R}^{r+1}$ there exists a neighborhood of $x$ over which $f$ is locally invertible.
My question is: how can I prove that the Jacobian determinant of the (local) inverse function $f^{-1}$ is also nonzero at every point where it is defined?
From the inverse function theorem we have that if $f$ is $C^1$ on some neighborhood $U$ of $\hat{x}$, and $Df(\hat{x})$ is invertible, then there exists some neighborhood $V$ of $f(\hat{x})$, and a $C^1$ function $\phi:V \to U$ such that $\phi(f(x)) = x$. Furthermore, $D \phi(f(\hat{x})) = (Df(\hat{x}))^{-1}$
If $f$ has an inverse, then we must have $f^{-1} = \phi$ on $V$, and hence $D(f^{-1})(f(\hat{x})) = (Df(\hat{x}))^{-1}$. Since $\det Df(\hat{x}) \neq 0$ and $\det\left((Df(\hat{x}))^{-1}\right) = \frac{1}{\det Df(\hat{x})}$, it follows that $\det D(f^{-1})(f(\hat{x})) = \frac{1}{\det Df(\hat{x})} \neq 0$. |
There are actually more trigonometric functions in existence than most of us know about. Math · Algebra II · Trigonometry. Trigonometric Ratios in Right Angle Triangle. For each point in the coordinate plane, there is one representation, but for each point in the polar plane, there are infinite representations. Angles that are in standard position are said to be quadrantal if their terminal side coincides with a coordinate axis. Figure 1 indicates a triangle with sides a, b and c and angles A, B and C respectively. Find the opposite side of the unit circle triangle. Quadrant resultant (step 3) Step 4. Since the circle is commonly divided into 360 degrees, the quadrants are named by 90-degree segments. This is the reference angle. Proceeding further, we nd that when 3ˇ 2 2ˇ, we retrace the portion of the curve in Quadrant IV that we rst traced out as ˇ 2 ˇ. jpg to represent the speed, s, in feet per second, of a toy car driving around a circular track having an angle of incline mc013-2. Positions on the celestial sphere may be specified by using a spherical polar coordinate system, defined in terms of some fundamental plane and a. I have seen students cramming the signs of Trigonometry Functions ( sinϴ, cosϴ, tanϴ, cosecϴ, secϴ, cotϴ) in four Quadrants. Solution of triangles is the term for solving the main trigonometric problem of finding the parameters of a triangle that include angle and length of the sides. Deals all about trigonometric formulas and identities. 6 Graphs of Other Trigonometric Functions 4. Free math problem solver answers your algebra, geometry, trigonometry, calculus, and statistics homework questions with step-by-step explanations, just like a math tutor. All the trig functions are positive in Quadrant 1. Linear Algebra. DeMoivre's Theorem and nth Roots. The only quadrant where sec theta is positive and sin theta is negative is Quadrant IV. Using Reference Angles to Evaluate Trigonometric Functions. Tangent of arctangent. The Right Triangle and Applications. That is the angle from 0 to 90 in any trigonometric functions gives the positive resultant. The trigonometric functions in MATLAB ® calculate standard trigonometric values in radians or degrees, hyperbolic trigonometric values in radians, and inverse variants of each function. navigation system known as Long Range Navigation (Loran) was developed between 1940 and 1943, and uses pulsed radio transmissions from so-called "master" and "slave" stations to determine a ship's position. Worksheet 2, Exercise 7. Introduction A _ coordinate system is formed by drawing. Reference angles make it possible to evaluate trigonometric functions for angles outside the first quadrant. A system of inequalities has more than one inequality statement that must be satisfied. Numbering starts from the upper right quadrant, where both coordinates are positive, and goes in an anti-clockwise direction, as in the picture. the angle fits in the coordinate system the initial side coincides with the positive x-axis the vertex is at origin. While right-angled triangle definitions permit the definition of the trigonometric functions for angles between 0 and radian (90°), the unit circle definitions allow to. Functions are mathematical language to show the relationship of two variables, most often found in college level algebra and trigonometry. Angles in standard position can be classified according to the quadrant contains their terminal sides. Quadrantal angles correspond to "integer multiples" of 90 or π. 
The three main functions in trigonometry are sine, cosine and tangent. For an angle $$\theta$$ in standard position — vertex at the origin, initial side along the positive $$x$$-axis — whose terminal side passes through a point $$(x, y)$$ at distance $$r = \sqrt{x^2 + y^2} > 0$$ from the origin, they are defined by

$$\sin\theta = \frac{y}{r}, \qquad \cos\theta = \frac{x}{r}, \qquad \tan\theta = \frac{y}{x}.$$

The reciprocal functions cosecant, secant and cotangent are $$1/\sin\theta$$, $$1/\cos\theta$$ and $$1/\tan\theta$$ respectively.

Closely related are polar coordinates. An ordered pair $$(r, \theta)$$ locates a point by its distance $$r$$ from the origin (the pole), like the radius of a circle, and by the angle $$\theta$$ (in degrees or radians) measured counter-clockwise from the positive $$x$$-axis (the polar axis).

The $$x$$- and $$y$$-axes divide the plane into four quadrants, numbered I to IV in the counter-clockwise direction starting from the upper right. In quadrant I both coordinates are positive $$(+, +)$$; in quadrant II they are $$(-, +)$$; in quadrant III, $$(-, -)$$; and in quadrant IV, $$(+, -)$$. All measurement is done taking the origin as the reference (or starting) point of the coordinate system.
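As a quick check of these definitions, here is a small Python sketch (the helper names are my own, not from any particular textbook) that converts between Cartesian and polar coordinates and reports which quadrant a point lies in.

```python
import math

def to_polar(x, y):
    """Cartesian (x, y) -> polar (r, theta), with theta in degrees in [0, 360)."""
    r = math.hypot(x, y)
    theta = math.degrees(math.atan2(y, x)) % 360   # atan2 picks the correct quadrant
    return r, theta

def to_cartesian(r, theta_deg):
    """Polar (r, theta in degrees) -> Cartesian (x, y)."""
    t = math.radians(theta_deg)
    return r * math.cos(t), r * math.sin(t)

def point_quadrant(x, y):
    """1-4 for points strictly inside a quadrant, 0 for points on an axis."""
    if x > 0 and y > 0: return 1
    if x < 0 and y > 0: return 2
    if x < 0 and y < 0: return 3
    if x > 0 and y < 0: return 4
    return 0

print(to_polar(1, 1))         # (1.4142..., 45.0)
print(point_quadrant(-2, 5))  # 2
print(to_cartesian(2, 210))   # approximately (-1.732, -1.0)
```

Note that `math.atan2` already encodes by itself the quadrant reasoning that the rest of this section develops by hand.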
The quadrant containing the terminal side of an angle determines the signs of its trigonometric ratios. Since $$r > 0$$, the signs follow directly from the signs of $$x$$ and $$y$$: in the first quadrant all ratios are positive; in the second quadrant only sine (and its reciprocal, cosecant) is positive; in the third quadrant only tangent (and cotangent) is positive; in the fourth quadrant only cosine (and secant) is positive.

A common memory aid is the CAST rule, also remembered as "All Students Take Calculus": reading counter-clockwise from quadrant I, the letters A, S, T, C record that All functions are positive in quadrant I, Sine in quadrant II, Tangent in quadrant III and Cosine in quadrant IV; each function is negative in the quadrants where it is not listed.

For example, a 130° angle has its terminal side in quadrant II (between 90° and 180°), so sin 130° is positive while cos 130° and tan 130° are negative. These sign facts are exactly what is needed when solving trigonometric equations: if sin x is known to be positive, the solutions must lie in quadrants I or II; if tan x is negative, the angles we are after must be in the second and fourth quadrants; and so on. (Incidentally, the modern word "sine" is derived from the Latin sinus — "bay", "bosom" or "fold" — which reached Latin indirectly, via Indian, Persian and Arabic transmission, from the Greek khordḗ, "bow-string, chord".)
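The CAST rule translates directly into code. The sketch below (again with made-up helper names) returns the quadrant of an angle given in degrees together with the expected signs of sine, cosine and tangent, and checks them against the values computed by the standard math module.

```python
import math

def angle_quadrant(deg):
    """Quadrant (1-4) of an angle in standard position; 0 if the angle is quadrantal."""
    d = deg % 360
    if d % 90 == 0:
        return 0
    return int(d // 90) + 1

def trig_signs(deg):
    """Expected signs of (sin, cos, tan) from the CAST rule."""
    signs = {1: ('+', '+', '+'),   # All positive
             2: ('+', '-', '-'),   # Sine positive
             3: ('-', '-', '+'),   # Tangent positive
             4: ('-', '+', '-')}   # Cosine positive
    return signs.get(angle_quadrant(deg), ('on axis',) * 3)

deg = 130
t = math.radians(deg)
print(angle_quadrant(deg), trig_signs(deg))               # 2 ('+', '-', '-')
print(math.sin(t) > 0, math.cos(t) > 0, math.tan(t) > 0)  # True False False
```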
An angle is in standard position when its vertex is at the origin and its initial side lies along the positive $$x$$-axis; it is said to lie in the first, second, third or fourth quadrant according to where its terminal side falls. If the terminal side coincides with an axis, the angle belongs to no quadrant and is called quadrantal.

Because the terminal side returns to the same position after every full turn, an angle $$\theta$$ measured in degrees is coterminal with $$\theta + 360°k$$ for any integer $$k$$ (in radian measure, with $$\theta + 2\pi k$$), and coterminal angles have identical trigonometric values. For instance, $$\sin 1110° = \sin(3 \times 360° + 30°) = \sin 30° = \tfrac{1}{2}$$.

Every non-quadrantal angle also has a reference angle: the acute angle, between 0° and 90° (or 0 and $$\pi/2$$ in radians), between its terminal side and the $$x$$-axis. The trigonometric functions of an angle have the same absolute values as those of its reference angle; only the signs change, according to the quadrant. For example, 210° lies in the third quadrant and its reference angle is 210° − 180° = 30°, so its reference triangle is a 30°-60°-90° triangle and

$$\sin 210° = -\frac{1}{2}, \qquad \cos 210° = -\frac{\sqrt{3}}{2}, \qquad \tan 210° = \frac{\sqrt{3}}{3},$$

the signs being those of the third quadrant, where only tangent is positive.
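The reference-angle rule is easy to verify numerically. A minimal sketch follows (the function name is an assumption, not a standard library API): it computes the reference angle of any angle in degrees and confirms that the sine agrees with the sine of the reference angle up to sign.

```python
import math

def reference_angle(deg):
    """Acute angle between the terminal side and the x-axis, in degrees."""
    d = deg % 360
    if d <= 90:
        return d
    if d <= 180:
        return 180 - d
    if d <= 270:
        return d - 180
    return 360 - d

for deg in (130, 210, 300, 1110):
    ref = reference_angle(deg)
    s = math.sin(math.radians(deg))
    rs = math.sin(math.radians(ref))
    print(f"{deg:5d}° -> reference {ref:3.0f}°, |sin| agrees: {math.isclose(abs(s), abs(rs))}")
```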
Angles in standard position whose terminal side coincides with a coordinate axis — 0°, 90°, 180°, 270° and their coterminal angles, that is, integer multiples of 90° (or of $$\pi/2$$) — are called quadrantal angles. Their sine and cosine values can be read directly from the points $$(1, 0)$$, $$(0, 1)$$, $$(-1, 0)$$ and $$(0, -1)$$ where the terminal side meets the unit circle.

The unit circle — the circle of radius 1 centred at the origin — ties all of this together. For the point where the terminal side of $$\theta$$ meets the unit circle, the $$x$$-coordinate is $$\cos\theta$$ and the $$y$$-coordinate is $$\sin\theta$$, because $$r = 1$$; tangent is the ratio $$y/x$$. The signs of the coordinates in each quadrant therefore reproduce the CAST rule, and the identity $$\sin^2\theta + \cos^2\theta = 1$$ is simply the equation of the circle.

As a worked example, 300° lies in the fourth quadrant with reference angle 60°, so

$$\sin 300° = -\frac{\sqrt{3}}{2}, \qquad \cos 300° = \frac{1}{2}, \qquad \tan 300° = -\sqrt{3}:$$

the magnitudes are the trigonometric ratios of 60°, and the signs are those of the fourth quadrant, where only cosine is positive.
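The quadrantal values can be checked in a few lines against the unit-circle points listed above; this is only an illustrative sketch.

```python
import math

# (cos, sin) read off the unit circle at the four quadrantal angles.
quadrantal = {0: (1, 0), 90: (0, 1), 180: (-1, 0), 270: (0, -1)}

for deg, (c, s) in quadrantal.items():
    t = math.radians(deg)
    assert math.isclose(math.cos(t), c, abs_tol=1e-12)
    assert math.isclose(math.sin(t), s, abs_tol=1e-12)
    tan = "undefined" if c == 0 else str(int(s / c))   # tan = y/x is undefined on the y-axis
    print(f"{deg:3d}°: cos = {c:2d}, sin = {s:2d}, tan = {tan}")
```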
In the first quadrant, the sine and cosine values of the special angles 30°, 45° and 60° ($$\pi/6$$, $$\pi/4$$, $$\pi/3$$) are worth memorising, since every other special angle reduces to one of them through its reference angle. The co-functions are linked by the complementary-angle identities

$$\sin x = \cos\!\left(\frac{\pi}{2} - x\right), \qquad \cos x = \sin\!\left(\frac{\pi}{2} - x\right), \qquad \tan x = \cot\!\left(\frac{\pi}{2} - x\right),$$

and the corresponding identities for the reciprocal functions follow from these. They also explain manipulations such as rewriting $$\sin 2\theta = \cos 3\theta$$ as $$2\theta = 90° - 3\theta$$ when the two angles are complementary.

The quadrant rule (the CAST diagram) is what makes it possible to solve a basic trigonometric equation over a full revolution without drawing a graph: find the reference angle from the absolute value of the given ratio, decide from its sign which quadrants can contain solutions, write down one solution in each of those quadrants, and add multiples of 360° (or $$2\pi$$) for solutions outside the basic range. For example, the solutions of $$\sin x = \tfrac{1}{2}$$ in $$[0°, 360°)$$ are $$x = 30°$$ (quadrant I) and $$x = 180° - 30° = 150°$$ (quadrant II), since sine is positive only in those two quadrants; note that $$\sin 150° = \sin 30° = \tfrac{1}{2}$$.
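Here is a minimal sketch of that procedure for equations of the form sin x = c on [0°, 360°); the function name `solve_sin` is an assumption for illustration only.

```python
import math

def solve_sin(c):
    """All x in [0, 360) degrees with sin(x) = c, via reference angle + quadrants."""
    if not -1 <= c <= 1:
        return []
    ref = math.degrees(math.asin(abs(c)))        # reference angle in [0, 90]
    if c >= 0:
        sols = {ref, 180 - ref}                  # sine positive: quadrants I and II
    else:
        sols = {180 + ref, 360 - ref}            # sine negative: quadrants III and IV
    return sorted(s % 360 for s in sols)

print(solve_sin(0.5))   # approximately [30.0, 150.0]
print(solve_sin(-1))    # [270.0]
print(solve_sin(0))     # [0.0, 180.0]
```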
# Polio: The eradication endgame
Researchers are developing a strategy that could put an end to polio forever.
In 1988, scientists around the world launched a massive effort to eliminate polio, a disease that can cripple and kill. The Global Polio Eradication Initiative (GPEI) has since made great progress: the number of polio cases has fallen by more than 99%, from an estimated 350,000 cases in 1988 to around 400 in 2013. And in January, India, once a stronghold for polio, celebrated an important milestone: three years with no new cases. Yet poliovirus stubbornly persists in Nigeria, Afghanistan and Pakistan, where violence, politics and mistrust have hampered eradication efforts. Indeed, in early 2014, Kabul saw its first case of polio since 2001. In 2012, the GPEI issued a dire warning: “Polio eradication is at a tipping point. If immunity is not raised in the three remaining countries to levels necessary to stop poliovirus transmission, polio eradication will fail.”
Female vaccinators — often the only ones allowed to speak to mothers or enter a child's home — wait outside a house in Afghanistan. Credit: Agron Dragaj
In chess, the final moves must be carefully planned, as one mistake can let your opponent gain the upper hand. It's the same with the polio endgame. Violence has made delivery of the vaccine nearly impossible in some regions. In others, fear and mistrust have led parents to refuse to have their children vaccinated. But there is another, seldom discussed, obstacle to eradication: in rare cases, the live, attenuated (weakened) virus in the oral polio vaccine (OPV) can mutate and spark polio outbreaks.
In April 2013, the GPEI presented a new strategy to wipe out polio — not only the wild virus, but also polioviruses derived from OPV. The plan is to introduce inactivated polio vaccine (IPV), which contains killed virus, in the 124 countries that rely on OPV by 2015. A more effective oral vaccine will then be used to eliminate the last pockets of virus. Once the world is free of polio, the oral vaccine can be phased out entirely. Introducing IPV in so many countries will pose a “major challenge”, says Elizabeth Miller, an epidemiologist who chairs the polio working group of the Strategic Advisory Group of Experts (SAGE) on Immunization. “On the other hand, it offers huge rewards in terms of progress towards eradication.”
## Sabin vs Salk: the rematch
Poliovirus replicates in the human gut and spreads through sneezes or coughs, or when someone comes into contact with infected faeces. Most people who contract polio develop only mild symptoms, if any. But in roughly 1 out of 200 infected individuals, the virus invades the nervous system and causes permanent paralysis. If the muscles that control breathing are paralysed, the disease can be fatal.
The fight against polio hinges on the two vaccines, IPV and OPV. IPV, an injectable vaccine invented by Jonas Salk and introduced in 1955, contains virus that has been bathed in a formaldehyde solution; this killed virus cannot replicate or cause paralysis. OPV, developed by Albert Sabin and approved in 1961, contains virus that has been weakened by growing it in monkey kidney cells. This live virus, delivered as oral drops, can replicate in the guts of vaccinated children for several weeks and spread — still weakened — through their faeces to unvaccinated children, allowing immunity to travel through the community. Because it is cheap and easy to administer, OPV has become the polio vaccine of choice, especially in developing countries.
But OPV has a major drawback: the live viruses in the vaccine can mutate, regaining their deadly characteristics. Roughly one in every 2.7 million children who receive OPV will become paralysed. In those regions where large swaths of the population remain unvaccinated, vaccine-derived polioviruses can regain their ability to circulate and cause outbreaks. “There have been quite a few vaccine-derived poliovirus outbreaks in the past few years,” says Nicholas Grassly, who heads the vaccine epidemiology research group at Imperial College London. One recent study1 estimates that vaccine-derived virus infected 700,000 people between 2005 and 2011, although only a small number of these would have developed paralysis. And these viruses continue to circulate. In 2013, about 60 people were paralysed as a result of circulating vaccine-derived poliovirus, most in a remote region in Pakistan. Eliminating vaccine-derived polio will require an end to the use of OPV. The GPEI advocates not a sudden withdrawal but rather a phased removal of Sabin's vaccine.
All polioviruses fall into one of three groups, or serotypes, and standard OPV contains a weakened version of all three. Types 1 and 3 circulate worldwide, but type 2 wild virus hasn't been seen since 1999 — and viruses derived from the type 2 Sabin strain account for most vaccine-related polio outbreaks. Type 2 is therefore the first to be eliminated from the vaccine. “The continued use of type 2 in the trivalent oral polio vaccine is causing more problems than it is preventing,” Miller says.
However, eliminating type 2 from OPV will leave children vulnerable to type 2 vaccine-derived infection. So before making the switch to bivalent OPV — which contains only type 1 and type 3 virus — the plan is to introduce a single dose of IPV, which protects against all three types. Miller says the combination “would protect the population should there be an emergence of a type 2 vaccine-derived strain.”
“This is an elegant strategy,” says Bruce Aylward, who has led the polio eradication programme at the World Health Organization for the past 15 years.
## At the sharp end
Elegance does not necessarily translate into ease or economy of implementation, however. OPV typically costs just US$0.14 per dose. But producing IPV requires more virus, and, because it is produced from virulent wild strains, its production demands expensive biosafety measures. It is therefore significantly more expensive. “The best price you can get for IPV is between US$2 and US$3 a dose,” says Stephen Cochi, senior adviser at the US Centers for Disease Control and Prevention's Global Immunization Division in Atlanta, Georgia. The cost of IPV will fall as demand grows — Miller and Aylward hope the poorest countries will be able to secure the vaccine for about US$1 a dose — but it is likely to remain more expensive than OPV. And funds used to purchase IPV won't be available to buy other vaccines for diseases far more common than polio. “A country like Uruguay will have many cases per year of diseases such as pneumonia, meningitis and hepatitis, but may go for 20–30 years without a single vaccine-related case of polio,” says Ciro de Quadros, executive vice-president of the Sabin Vaccine Institute in Washington, DC. Such lopsided statistics, he says, will influence investment priorities.
A dissolving microneedle patch on a finger. Credit: Jeong-Woo Lee
Administering IPV is more problematic, too. Unlike oral drops, injections require trained professionals and sterile syringes. There's also the thorny question of acceptance. In parts of Pakistan and Nigeria, some people are already suspicious of the vaccine. Now health officials must convince parents that their children need not one but two different kinds of polio vaccine.
Researchers are working to help health officials overcome these barriers. Aylward and his colleagues are investigating cost-cutting measures, such as using adjuvants to reduce the amount of virus needed. They have also found that the dose can be reduced if the vaccine is injected under the skin instead of into the muscle2. Mark Prausnitz, a chemical engineer at the Georgia Institute of Technology in Atlanta, is working on a version of IPV that could be applied like a band-aid, eliminating the need for syringes and trained medical professionals. The patch contains 100 microneedles, each less than a millimetre long, affixed to a flexible pad smaller than a postage stamp. When the patch is in place, the needles puncture the skin and dissolve in 5–10 minutes, releasing the inactivated virus. Prausnitz recently tested the polio patch in rhesus macaques and found that it raises an immune response just as effectively as the standard injectable vaccine. He and his colleagues are seeking funding to conduct a clinical trial of the patch.
The challenges posed by IPV extend beyond economics and logistics. IPV is good at protecting against paralysis, but it doesn't evoke a strong immune response in the gut, where polio replicates, so people who are vaccinated with IPV can still spread the virus. Israel made the switch from OPV to IPV in 2005. The country hasn't had a case of paralytic polio since 1988, but authorities continue to monitor the country's sewage for signs of the virus. In the spring of 2013, they found it in the sewers of Rahat in southern Israel. By August the virus had been detected in 91 sewage samples from 27 sites in southern and central Israel, and in faecal samples from 42 people in those regions. “It spread throughout Israel and to the West Bank and Gaza,” Cochi says. Because roughly 94% of children in Israel have been vaccinated, not a single child developed the disease. But the continued circulation of the virus puts other countries with lower rates of vaccination at risk.
It is theoretically possible that, once the world begins using bivalent OPV, type 2 vaccine-derived outbreaks could emerge and undergo a similar silent spread because children who had received IPV would probably not develop paralysis3. “In Israel they have the most intensive environmental sampling — looking for poliovirus and other pathogens — in the world,” Cochi says. But in regions without intensive sampling, the virus could go undetected much longer.
Aylward considers that scenario unlikely, however. He and Grassly recently developed a model to examine the risk and found that, under most conditions, IPV will hasten the virus's demise3. But when vaccine coverage is high and the virus has a high reproductive rate, “you could see the situation you see in Israel: persistent circulation,” Aylward says. Fortunately, he adds, “not many settings mimic that environment.” What's more, the GPEI aims to introduce only a single dose of IPV, so children will have some immunity but won't be fully protected, making it less likely that an outbreak will go undetected. Aylward acknowledges the potential risk of this approach, but he emphasizes the perils of continuing to use an oral vaccine containing all three strains of virus. “The error,” he says, “is thinking the current situation is a safe one.”
The GPEI's optimistic timeline sees the world certified polio-free in 2018, a result that would allow the complete withdrawal of the oral vaccine. But even if the many obstacles to introducing IPV can be overcome, it may be difficult to introduce it into enough countries in 2015 to prepare for a coordinated withdrawal of type 2 vaccine in 2016. As de Quadros points out, such a rapid rollout would be unprecedented. “The way they are planning the introduction is very ambitious,” he says. “It will be interesting to observe what will happen.”
That's the problem with endgames. Even with only a few pieces left on the board, it's still not entirely clear how to win the game.
## References
1. Burns, C. C. et al. J. Virol. 87, 4907–4922 (2013).
2. Resik, S. et al. N. Engl. J. Med. 368, 416–424 (2013).
3. Mangal, T. D., Aylward, R. B. & Grassly, N. C. Am. J. Epidemiol. 178, 1579–1587 (2013).
Willyard, C. Polio: The eradication endgame. Nature 507, S14–S15 (2014). https://doi.org/10.1038/507S14a
Reports tagged with Information Theoretic Security:
TR16-078 | 9th May 2016
Gregory Valiant, Paul Valiant
#### Information Theoretically Secure Databases
We introduce the notion of a database system that is information theoretically "secure in between accesses"--a database system with the properties that 1) users can efficiently access their data, and 2) while a user is not accessing their data, the user's information is information theoretically secure to malicious agents, provided ...
TR17-038 | 23rd February 2017
Benny Applebaum, Barak Arkis, Pavel Raykov, Prashant Nalini Vasudevan
#### Conditional Disclosure of Secrets: Amplification, Closure, Amortization, Lower-bounds, and Separations
Revisions: 1
In the \emph{conditional disclosure of secrets} problem (Gertner et al., J. Comput. Syst. Sci., 2000) Alice and Bob, who hold inputs $x$ and $y$ respectively, wish to release a common secret $s$ to Carol (who knows both $x$ and $y$) if and only if the input $(x,y)$ satisfies some predefined predicate ...
TR18-033 | 16th February 2018
Benny Applebaum, Thomas Holenstein, Manoj Mishra, Ofer Shayevitz
#### The Communication Complexity of Private Simultaneous Messages, Revisited
Revisions: 2
Private Simultaneous Message (PSM) protocols were introduced by Feige, Kilian and Naor (STOC '94) as a minimal non-interactive model for information-theoretic three-party secure computation. While it is known that every function $f:\{0,1\}^k\times \{0,1\}^k \rightarrow \{0,1\}$ admits a PSM protocol with exponential communication of $2^{k/2}$ (Beimel et al., TCC '14), the ...
TR18-208 | 27th November 2018
Benny Applebaum, Prashant Nalini Vasudevan
#### Placing Conditional Disclosure of Secrets in the Communication Complexity Universe
Revisions: 2
In the *Conditional Disclosure of Secrets* (CDS) problem (Gertner et al., J. Comput. Syst. Sci., 2000) Alice and Bob, who hold $n$-bit inputs $x$ and $y$ respectively, wish to release a common secret $z$ to Carol (who knows both $x$ and $y$) if and only if the input $(x,y)$ satisfies ...
## Saturday, September 1, 2012
### Segmentation Algorithms in scikits-image
Recently some segmentation and superpixel algorithms I implemented were merged into scikits-image. You can see the example here.
I reimplemented Felzenszwalb's fast graph based method, quickshift and SLIC.
The goal was to have easy access to some successful methods to make comparison easier and encourage experimenting with the algorithms.
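If you just want to try them, here is a minimal sketch using the current scikit-image API (function and parameter names follow today's `skimage.segmentation` module and may differ slightly from the release described in this post; the parameter values are arbitrary):

```python
# Run the three segmentation algorithms discussed in this post on a sample image.
from skimage import data, img_as_float
from skimage.segmentation import felzenszwalb, slic, quickshift

img = img_as_float(data.astronaut())

segments_fz = felzenszwalb(img, scale=100, sigma=0.5, min_size=50)
segments_slic = slic(img, n_segments=250, compactness=10, sigma=1)
segments_quick = quickshift(img, kernel_size=3, max_dist=6, ratio=0.5)

print("Felzenszwalb segments:", segments_fz.max() + 1)
print("SLIC segments:", segments_slic.max() + 1)
print("Quickshift segments:", segments_quick.max() + 1)
```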
Here is a comparison of my implementations against the original implementations on Lena (downscaled by a factor of 2). The first row is my implementation, the second the original.
For the comparison, I used my python bindings of vl_feat's quickshift, my SLIC bindings and used the executable provided for Felzenszwalb's method.
In general, I think this looks quite good. The biggest visual difference is for SLIC, where my implementation clearly does not do as well as the original one. I am quite sure this is a matter of using the right color space transform.
For the fast graph based approach, the result looks qualitatively similar, but is actually different. The reason is that I implemented a "per channel" approach, as advocated in the paper: Segment each RGB channel separately, then combine the segments using intersection.
Reading the code later on, I saw the algorithm that is actually implemented works directly on the RGB image - which is what I first implemented, but then revised after reading the paper :-/
It should be fairly easy to change my implementation to fit the original one (and not do what is said in the paper).
Here are some timing comparisons - they are just on this image, so to be taken with a grain of salt. But I think the general message is clear:
|          | Fast Graph Based | SLIC   | Quickshift |
|----------|------------------|--------|------------|
| mine     | 910 ms           | 589 ms | 5470 ms    |
| original | 166 ms           | 234 ms | 5130 ms    |
So the original implementation of the Fast Graph Based approach is much faster than mine, though as said above, it implements a different approach. I would expect a speedup of roughly 3x by using theirs, which would make my code still half as slow. For SLIC, my code is also about half as slow, while for Quickshift mine is only slightly slower. I am a bit disappointed with the outcome, but I think this is at least a reasonable place to start for further improvements. My first priority would be to qualitatively match the performance of SLIC.
While working on the code, I noticed that using the correct colorspace (Lab) is really crucial for this algorithm to work. For quickshift, it did not make much of a difference. One problem here is that the quickshift code, the SLIC code and scikits-image all use different transformations from RGB to Lab.
I will have to play with these to get a feeling on how much they influence the outcome. As the code is now included in scikits-image, you can find it in our (I recently gained commit rights :) github repository.
During this quite long project, I really came to love Cython, which I used to write all of the algorithms. The workflow from a standard Python program for testing to an implementation with C speed is seamless! The differences in speed between my implementations and the originals are quite certainly due to some algorithmic issues, rather than "Python is slow". The C code generated by Cython is really straightforward and fast.
I want to point out the
cython -a
command, as it took me some time to discover it. It is simply brilliant. It gives a html output of your cython code, highlighting lines in which Python API is called (and that are therefore slow).
If you want to implement an algorithm from scratch for Python, and it must be fast, definitely use Cython!
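As a concrete illustration of the kind of typed loop meant here (a toy example, not code from the scikit-image implementations):

```cython
# cython: boundscheck=False, wraparound=False
# toy_sums.pyx -- a toy typed loop; compile with: cythonize -i toy_sums.pyx
import numpy as np

def column_sums(double[:, :] img):
    """Sum each column of a 2D float64 array with a plain C loop."""
    cdef Py_ssize_t i, j
    cdef double[:] out = np.zeros(img.shape[1])
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[j] += img[i, j]
    return np.asarray(out)
```

Calling `column_sums(np.random.rand(512, 512))` from Python then runs at C speed, and `cython -a` on this file shows the inner loop free of Python API calls.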
That's all :)
1. Hey Andy,
As you might have noticed I've been messing with your scikit-image SLIC code quite a lot recently. =) I hadn't realised the differences in the superpixel shapes between your implementation and the original. Do you have any ideas about where the differences come from? I'll be happy to keep working on making this a reference implementation of SLIC! =)
1. Sorry, I didn't know you were working on it. That's great. I have no idea where the difference comes from. It has been on my todo list for a long time, but so are many other things :-/
First I thought it was the way that the Lab conversion is calculated, but I don't think this is the case.
It seems somehow to have to do with the scaling of the compactness parameter. In the reference implementation, the compactness parameter is pretty robust wrt image size. In my implementation, it is not :-/
If $1,-2,3$ are the eigenvalues of the matrix $A$, then the ratio of the determinant of $B$ to the trace of $B$ is _______, where $B=[adj(A)-A-A^{-1}-A^{2}]$.
0
is it -3:2?
0
+2
Apply $adj(A) = \left | A \right |A^{-1}$, so for an eigenvalue $\lambda$ of $A$ the corresponding eigenvalues of $adj(A)$ and $A^{-1}$ are $\frac{|A|}{\lambda}$ and $\frac{1}{\lambda}$.
Then for each eigenvalue of $A$ you will get an eigenvalue of $B$: $-7, \frac{1}{2}, \frac{-41}{3}$
Computing the determinant = 47.6
Computing the trace = -20.17
Hence by dividing you will get = -2.3
0
@ashwin can you please elaborate how you are calculating $B$? It would be of great help to me... a little dense in linear algebra :p
+1
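To spell out the eigenvalue bookkeeping used in the comments above (a general sketch; the final numbers depend on the exact signs in the definition of $B$): since $A$ has distinct eigenvalues $1,-2,3$, it is diagonalizable and $\left | A \right | = 1\cdot(-2)\cdot 3 = -6$. If $Av=\lambda v$, then $A^{2}v=\lambda^{2}v$, $A^{-1}v=\frac{1}{\lambda}v$, and, because $adj(A)=\left | A \right |A^{-1}$, also $adj(A)\,v=\frac{\left | A \right |}{\lambda}v$. So each eigenvalue $\lambda$ of $A$ produces an eigenvalue of $B$ by applying the same combination of these terms to $\lambda$; the determinant of $B$ is the product of the three resulting values and the trace of $B$ is their sum.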
# Brian is analyzing the performance of certain stocks. He finds that
Manager
Joined: 23 Jan 2011
Posts: 94
Brian is analyzing the performance of certain stocks. He finds that [#permalink]
14 Jul 2011, 12:09
1
14
Difficulty: 95% (hard)
Question Stats: 21% (02:45) correct, 79% (03:13) wrong, based on 85 sessions
Brian is analyzing the performance of certain stocks. He finds that for the first fifteen days of a 30 day period, the daily closing price per share of Phonexpharma Inc. conformed to the function $$f(x)=0.4x+2$$ , where x represents the day in the period. He found that for the last 15 days in the same period, the stock followed the function $$f(x)=(-1/7)x+9$$, where x represents the day in the period. Approximately what is the median closing price per share of Phonexpharma Inc. for the entire 30 day period?
A. 4.7
B. 5.55
C. 6.71
D. 8
E. 11.1
Is it possible to get such questions on the actual exam? If so, what is the best way to go about it?
Manager
Joined: 02 Feb 2016
Posts: 85
GMAT 1: 690 Q43 V41
Re: Brian is analyzing the performance of certain stocks. He finds that [#permalink]
06 Sep 2017, 13:47
5
2
If I am not mistaken, I don't think we need to make any guesses once we have calculated the highest and the lowest values from each of the first and last 15 days.
Highest (First 15 days) = 8
Smallest (First 15 days) = 2.4
Highest (Last 15 days) = 6.72
Lowest (Last 15 days) = 4.72
Median is the middle value of the set when all values are arranged in the ascending order. We know 4 values from the set of 30 and arranging them would look something like:
2.4 ---- 4.72 -- Median -- 6.72 ---- 8
Median is approximately between 6.72 and 4.72. Only Option B - '5.55' satisfies that.
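For anyone who wants to verify the approximation off the clock, a quick brute-force check (obviously not a test-day approach) confirms the answer:

```python
# Build all 30 daily closing prices and take the median directly.
first_half = [0.4 * x + 2 for x in range(1, 16)]    # days 1..15
second_half = [-x / 7 + 9 for x in range(16, 31)]   # days 16..30
prices = sorted(first_half + second_half)
median = (prices[14] + prices[15]) / 2              # average of 15th and 16th values
print(round(median, 2))                             # ~5.59, closest to choice B (5.55)
```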
##### General Discussion
Senior Manager
Joined: 28 Jun 2009
Posts: 348
Location: United States (MA)
Re: Brian is analyzing the performance of certain stocks. He finds that [#permalink]
14 Jul 2011, 12:38
For first 15 days: F1
For last 15 days: F2
Median is (15th day value + 16th day value)/2
15th day value : substitute x=15 in F1
16th day value : substitute x=1 in F2
Manager
Joined: 23 Jan 2011
Posts: 94
Re: Brian is analyzing the performance of certain stocks. He finds that [#permalink]
14 Jul 2011, 13:04
piyatiwari wrote:
For first 15 days: F1
For last 15 days: F2
Median is (15th day value + 16th day value)/2
15th day value : substitute x=15 in F1
16th day value : substitute x=1 in F2
I do not think that will provide the right answer. I get ~ 8.5 as the median using your inputs. However, it is 5.5.
Guess the important thing is that the median is not the value on the 15th/16th day but the 15th/16th values in ascending order.
Intern
Joined: 20 Jan 2011
Posts: 42
Re: Brian is analyzing the performance of certain stocks. He finds that [#permalink]
14 Jul 2011, 13:17
2
1
First 15 days: F1, x= 1 to 15 and f(x)= 2.4 to 8
Last 15 days: F2, x= 16 to 30 and f(x) = ~4.7 to ~6.7 (30th to 16th)
Median is average of 15th and 16th value when all of the above values are arranged in order
Do not try to calculate all the values. Go for the choices.
11.1 - outside the range of values
8- last value
6.71 - last value for the F2, so already we have 15 values less than 6.71. Additionally F1 has some values less.
4.7 - First value for the F2, so already we have 15 values greater than 4.7. Additionally F1 has some values greater.
Hence 5.55 must be the answer.
If we had very close answer choices, it would have been very calculation- intensive problem.
Manager
Joined: 18 Oct 2010
Posts: 61
Re: Brian is analyzing the performance of certain stocks. He finds that [#permalink]
15 Jul 2011, 18:34
As I understood, the median of 30 days is (the fifteenth day + the sixteenth day)/2,
and I don't get the answer you gave. I found 7.5.
Manager
Joined: 18 Oct 2010
Posts: 61
Re: Brian is analyzing the performance of certain stocks. He finds that [#permalink]
15 Jul 2011, 18:39
ohh right! from 1 to 15 days, the biggest number is 8
from 16 to 30 then smallest number is 4.27
so the median is 6.1 and 6.1 is between 5.5 to 6.5
Manager
Joined: 14 Apr 2011
Posts: 162
Re: Brian is analyzing the performance of certain stocks. He finds that [#permalink]
16 Jul 2011, 02:28
Good question. Thanks Hellishbrain for sharing the solution! So, for median calculations in such questions, we get the range from each function first and then eliminate answer choices. If we can't eliminate, then we have to make a guess and move on, since it looks very calculation-intensive to find the 15th and 16th values in order.
Director
Joined: 27 May 2012
Posts: 945
Re: Brian is analyzing the performance of certain stocks. He finds that [#permalink]
07 Dec 2017, 07:01
TheMastermind wrote:
If I am not mistaken, I don't think we need to make any guesses once we have calculated the highest and the lowest values from each of the first and last 15 days.
Highest (First 15 days) = 8
Smallest (First 15 days) = 2.4
Highest (Last 15 days) = 6.72
Lowest (Last 15 days) = 4.72
Median is the middle value of the set when all values are arranged in the ascending order. We know 4 values from the set of 30 and arranging them would look something like:
2.4 ---- 4.72 -- Median -- 6.72 ---- 8
Median is approximately between 6.72 and 4.72. Only Option B - '5.55' satisfies that.
Thank you , this method was very helpful .
_________________
- Stne
# Tag Info
12
Compile this document via pdflatex: \documentclass[tikz]{standalone} \pdfcompresslevel=0 \pdfobjcompresslevel=0 \begin{document} \begin{tikzpicture} \draw[line width=0.01pt] (0,0) rectangle (1cm,1cm); \end{tikzpicture} \end{document} Then search w directives in the PDF (a simple text file). You find: 0.3985 w 0.00995 w The first occurrence is ...
8
It's not so hard with TikZ, it just requires some learning and working with it. Here is code for you to start, inspired by Gonzalo, follow the Link given by Alan to learn more. \documentclass{article} \usepackage{tikz} \usetikzlibrary{decorations.markings} \begin{document} \begin{tikzpicture}[decoration={markings, mark=at position 1cm with ...
7
You could simply protect the \par. The error goes away if I do this at both places in the definition of \EndMark: \addtocontents{toc}{\protect\label{en\thechapmark}% \protect\par\protect\begin{tikzpicture}[overlay,remember picture,baseline] \protect\node [anchor=base] (e\thechapmark) {}; ...
7
You don't have to place the pspicture environment to use pstricks. Instead you should use node-connections and specify the connection type after the equation: \documentclass{article} \usepackage{mathtools,lipsum} \usepackage{pstricks,pstricks-add} \psset{linewidth=.4pt} \begin{document} \lipsum[1] \vspace{2\baselineskip} ...
6
I don't know what it should look like (I'm colorblind so everything looks OK to me) but here is the second one. If your viewer can handle it, play with the blend mode parameter for different effects. \documentclass[tikz]{standalone} \begin{document} \begin{tikzpicture}[mys/.style={pink!80,fill opacity=0.5,draw=black }] \begin{scope}[transparency group] ...
6
An alternative would be to use the tikzmark library: \documentclass{article} \usepackage{fontspec,kantlipsum,tikz} \usetikzlibrary{tikzmark,calc,decorations.pathreplacing} \begin{document} \kant[1] \vspace*{10ex} \sum_{i,\,j,\,m,\,k} \!\!\!\! \left \langle \tikzmark{Ci} C_i \tikzmark{Cj} C_j \tikzmark{Cm} C_m \tikzmark{Ck} C_k ...
6
Replace all the \addplot by \foreach \a in {-2.4,-1.8,...,2.4}{ \addplot [domain=-5:5, samples=100, color=cyan]{\a*x^2}; } \foreach \a in {-2.1,-1.5,...,2.4}{ \addplot [domain=-5:5, samples=100, color=red,dashed]{\a*x^2}; } Code \begin{tikzpicture} \begin{axis}[grid=major, xmin=-5, xmax=5, ymin=-30, ymax=30, xlabel=$t$, ylabel=$y$]; \foreach \a in ...
5
Drawing a colored box is my catchword for offering a tcolorbox as solution. The exact white space dimension can be set with before skip and after skip. Since you want it to be the same, you can also use beforeafter skip. I use two boxes below: The first one is the normal one, the second one just to show the distance settings. ...
5
5
As the comment of percusse and cfr says, you can't use options for tikz library. And you can read in the manual that you can replace the curly braces with square brackets (ConTeXt specific). Here is an example how you can set the option by using .is choice handler. \documentclass[border=1cm]{standalone} \usepackage{tikz} \usetikzlibrary{calc} % ----------- ...
5
Next time, please provide a Minimum Working Example so that people don't have to copy every word from an image but can cut-and-paste at least the basic structure of the document and textual content of the diagram. This solution uses forest and constructs and adds the labels at the beginning of the nodes automatically. For this, I got very fast, accurate ...
4
Here is a conceptual way of doing it. I think there are a lot of things to improve (less macros because you only need the last two items in the color description, proper expansion control, possibility of custom macro name) but I didn't have time. I think you can take it from here. \documentclass[varwidth,border=50]{standalone} \usepackage{tikz} ...
4
Something like this? \documentclass[tikz, border=10pt]{standalone} \usetikzlibrary{shadings,calc,patterns} \begin{document} \begin{tikzpicture} \fill [left color=gray!50!black, right color=gray!50!black, middle color=gray!50, shading=axis, opacity=0.25] (2,0) coordinate (a) -- (2,6) coordinate (b) arc (360:180:2cm and 0.5cm) ...
4
Just for fun without PSTricks. \documentclass[tikz,12pt,dvipsnames,border=0cm]{standalone} \usetikzlibrary{patterns} \def\M{10}% columns \def\N{10}% rows \def\scale{1}% scale \def\filename{example-image-a}% filename \def\mygrid{% \draw[help lines,red,step=.1,ForestGreen!50](-\M,-\N) grid (\M,\N); \draw[help lines,red,step=1](-\M,-\N) grid ...
4
\documentclass{article} \usepackage{tikz} \usepackage{mwe} \begin{document} \begin{tikzpicture} \node {\includegraphics[]{example-image}}; \end{tikzpicture} \begin{tikzpicture} \begin{scope} \clip (2,0) rectangle (5cm,8cm); \node[anchor=south west] {\includegraphics[]{example-image}}; \end{scope} \end{tikzpicture} \end{document} ...
4
Here is an example using path picture. You can fill with imported image or with tikz image. \documentclass[varwidth,border=50]{standalone} \usepackage{tikz} \usepackage{mwe} \tikzset{ path image/.style={ path picture={ \node at (path picture bounding box.center) { \includegraphics[height=3cm]{example-image}};}}, path tikzimage/.style={ ...
4
I would suggest using the background package and a little trickery. The following defines a new command \installbackgrounds[]{}. The first argument is optional and sets the total number of pages in the file of backgrounds. The second, mandatory argument specifies the file. If no total is specified, the number defaults to 1. \documentclass[a4paper]{report} ...
3
\spy uses the coordinate system in the scope of the spy using outlines option. To use your special coordinate system for spying your picture, you may define named coordinates. \documentclass{standalone} \usepackage{tikz} \usetikzlibrary{spy} \begin{document} \begin{tikzpicture}[spy using outlines={circle,red,magnification=3,size=2.5cm, connect spies}] ...
3
With TikZ 3.0 arrives math library. This library defines a simple mathematical language to define simple functions and perform sequences of basic mathematical operations. Here is a code from the manual (p.629), slightly modified to include function use. \documentclass[varwidth,border=50]{standalone} \usepackage{tikz} \usetikzlibrary{math} \tikzmath{ ...
3
Or with just one \foreach: \documentclass{article} \usepackage{pgfplots} \begin{document} \begin{tikzpicture} \begin{axis}[grid=major, xmin=-5, xmax=5, ymin=-30, ymax=30, xlabel=$t$, ylabel=$y$] \foreach \Valor [count=\Cont] in {-2.4,-2.1,...,2.4} { \ifodd\Cont\relax \def\Color{cyan} \def\Shape{} \else \def\Color{red} \def\Shape{dashed} ...
3
Another way, using a single \foreach and a user-defined cycle list. This cycle list can then be used throughout your document if desired. \documentclass{article} \usepackage{pgfplots} \pgfplotscreateplotcyclelist{mylist}{{color=cyan},{color=red,dashed}} % can now be used throughout the document \begin{document} \begin{tikzpicture} \begin{axis}[ ...
3
Possibly something like this? This was produced using the pin facility for labelling nodes. Note that I've updated your code to use \tikzset consistently since \tikzstyle is deprecated. \documentclass[tikz, border=10pt]{standalone} \begin{document} \tikzset{% point/.style = {fill=black,inner sep=1pt, circle, minimum ...
3
The dateplot library shipped with pgfplots cannot handle seconds due to limited accuracy. If you need accuracy of this granularity, you can probably ignore the DATE part of your input. In this case, a solution could be as follows: \documentclass{standalone} \usepackage{pgfplots} \usepgfplotslibrary{dateplot} \def\checkSameDate#1{% \ifnum\coordindex=0 ...
3
Based on the information that MikTeX actually failed to update PGF to 3.0.0, I am able to confirm that this is a duplicate of Problem using atan in pgfplots and Miktex 2.9 pgfplots, circuitikz library collision problem . I will update Problem using atan in pgfplots to provide workarounds. There are actually two distinct problems: I introduced an ...
3
I'm not sure at all how you are determining the width of the line segment, but, unless you need the line to be exactly 0.01pt in width, TikZ certainly allows you to construct lines with extremely narrow widths % arara: pdflatex % arara: pdflatex % arara: open \documentclass[border=10pt]{standalone} \usepackage{tikz} \begin{document} ...
3
You can use the angles library which defines a pic for this purpose. The quotes library is used for ease of labelling. \documentclass[tikz,border=10pt]{standalone} \usetikzlibrary{calc,patterns,angles,quotes} \begin{document} \begin{tikzpicture} \coordinate (origo) at (0,0); \coordinate (pivot) at (1,5); % draw axes \fill[black] (origo) ...
3
As cfr said in the comment, there is easier solution for your particular problem by using shorten. Now for your question, the answer is : No, there are no anchor names that seats on the node line. But you can create them. If you want one particular anchor, let say "west 1mm inside", you can use something like label={[coordinate, name=in-west, label ...
3
Now as TikZ 3.0 is here we can use pic. \documentclass[tikz,border=5]{standalone} \tikzset{ pics/carc/.style args={#1:#2:#3}{ code={ \draw[pic actions] (#1:#3) arc(#1:#2:#3); } } } \begin{document} \begin{tikzpicture} \pic{carc=-30:30:2cm}; \draw[thick] (4,0) circle (1 cm) pic[red, -latex]{carc=100:150:1.3cm}; \end{tikzpicture} ...
3
This is an adaption of Kpym's answer. While Kpym's solution is closer to your stated desiderata, mine is shorter and more flexible in that it allows you to select any colour. \documentclass[border=5pt,tikz]{standalone} \usetikzlibrary{calc} \tikzset{ C/.style={circle}, theme color/.style={C/.append style={fill=#1!50}}, } \tikzset{theme color=magenta} ...
3
This should help you to get started. I used mostly the decorations.markings library for positioning. You may want to check section 48.5 of the mighty PGF manual for more info. \documentclass{article} \usepackage{tikz} \usetikzlibrary{decorations.markings} \begin{document} \begin{tikzpicture}[every node/.style={font=\tiny\itshape},decoration={markings, ...
### Sinusoidal
"Sinusoid" redirects here. For the blood vessel, see Sinusoid (blood vessel).
The sine wave or sinusoid is a mathematical curve that describes a smooth repetitive oscillation. It is named after the function sine, of which it is the graph. It occurs often in pure and applied mathematics, as well as physics, engineering, signal processing and many other fields. Its most basic form as a function of time (t) is:
$y\left(t\right) = A \cdot \sin\left(2 \pi f t + \phi\right) = A \cdot \sin\left(\omega t + \phi\right)$
where:
• A, the amplitude, is the peak deviation of the function from zero.
• f, the ordinary frequency, is the number of oscillations (cycles) that occur each second of time.
• ω = 2πf, the angular frequency, is the rate of change of the function argument in units of radians per second
• φ, the phase, specifies (in radians) where in its cycle the oscillation is at t = 0.
• When φ is non-zero, the entire waveform appears to be shifted in time by the amount φ/ω seconds. A negative value represents a delay, and a positive value represents an advance.
Audio example: 5 seconds of a 220 Hz sine wave (File: 220 Hz sine wave.ogg).
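As a quick illustration of these parameters, the following Python sketch samples such a tone (the 220 Hz example above); the sampling rate is an arbitrary choice:

```python
# Sample y(t) = A*sin(2*pi*f*t + phi) for a 220 Hz tone, 5 seconds long.
import numpy as np

A, f, phi = 1.0, 220.0, 0.0           # amplitude, ordinary frequency (Hz), phase (rad)
sample_rate = 44100                    # samples per second
t = np.arange(0, 5.0, 1.0 / sample_rate)
y = A * np.sin(2 * np.pi * f * t + phi)
print(y[:5])                           # first few samples of the waveform
```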
The sine wave is important in physics because it retains its waveshape when added to another sine wave of the same frequency and arbitrary phase and magnitude. It is the only periodic waveform that has this property. This property leads to its importance in Fourier analysis and makes it acoustically unique.
## General form
In general, the function may also have:
• a spatial dimension, x (aka position), with wavenumber k
• a non-zero center amplitude, D
which is
$y\left(x,t\right) = A\cdot \sin\left(\omega t - kx + \phi \right) + D.\,$
The wavenumber is related to the angular frequency by:
$k = \frac{\omega}{c} = \frac{2 \pi f}{c} = \frac{2 \pi}{\lambda}$
where λ is the wavelength, f is the frequency, and c is the speed of propagation.
This equation gives a sine wave for a single dimension, thus the generalized equation given above gives the amplitude of the wave at a position x at time t along a single line. This could, for example, be considered the value of a wave along a wire.
In two or three spatial dimensions, the same equation describes a travelling plane wave if position x and wavenumber k are interpreted as vectors, and their product as a dot product. For more complex waves such as the height of a water wave in a pond after a stone has been dropped in, more complex equations are needed.
## Occurrences
This wave pattern occurs often in nature, including ocean waves, sound waves, and light waves.
A cosine wave is said to be "sinusoidal", because $\cos\left(x\right) = \sin\left(x + \pi/2\right),$ which is also a sine wave with a phase-shift of π/2. Because of this "head start", it is often said that the cosine function leads the sine function or the sine lags the cosine.
The human ear can recognize single sine waves as sounding clear because sine waves are representations of a single frequency with no harmonics; some sounds that approximate a pure sine wave are whistling, a crystal glass set to vibrate by running a wet finger around its rim, and the sound made by a tuning fork.
To the human ear, a sound that is made up of more than one sine wave will either sound "noisy" or will have detectable harmonics; this may be described as a different timbre.
## Fourier series
Main article: Fourier analysis
In 1822, Joseph Fourier, a French mathematician, discovered that sinusoidal waves can be used as simple building blocks to describe and approximate any periodic waveform including square waves. Fourier used it as an analytical tool in the study of waves and heat flow. It is frequently used in signal processing and the statistical analysis of time series.
## Traveling and standing waves
Since sine waves propagate without changing form in distributed linear systems, they are often used to analyze wave propagation. Sine waves traveling in two directions can be represented as
$y\left(t\right) = A \sin\left(\omega t - kx\right)$ and $y\left(t\right)= A \sin\left(\omega t + kx\right).$
When two waves having the same amplitude and frequency, and traveling in opposite directions, superpose each other, then a standing wave pattern is created.
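This follows from the sum-to-product identity; written out, the superposition of the two traveling waves above is
$A \sin\left(\omega t - kx\right) + A \sin\left(\omega t + kx\right) = 2 A \sin\left(\omega t\right) \cos\left(kx\right),$
so every point oscillates in time as $\sin\left(\omega t\right)$ while the fixed spatial envelope $\cos\left(kx\right)$ determines the nodes.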
# Wikiwand Website & Extensions
1. ## Why Facebook for bookmarks?
Let us have bookmarks that are not associated with Facebook. Why can't we just use our Wikipedia logins or create a login for Wikiwand?
Recently, I was logged out and cannot login again.
Developer Tool tells that,
https://www.wikiwand.com/api/user/facebook-authenticated returns 504 GATEWAY_TIMEOUT. Could you help? Thanks!
3. ## allow customisation of background, font, visited links, un-visited links colors
Allow customisation of background, font, visited links, and un-visited links colors under the personalize menu of the wikiwand menu bar. At present there are visibility issues, especially since everyone has a different visual spectrum acuity, not to mention the differentials in each monitor's color profile settings, effects of aging of the electronic components, degradation of the light emitting crystals, ambient room lighting, relative position of their monitors versus the window, etc... I believe that no matter how well you tweak your official wikiwand colors, no single set of color profiles will ever satisfy all wikiwand users. The simplest solution I think is to allow…
4. ## Add a built in player for audio files (.mid, .ogg)
It's really annoying having to download these files just to play them. Also .ogg is not supported without additional downloads of codecs etc, so it's difficult for the average user to even listen to these files.
If you guys could fix that, it'd be a big reason for me to use your app over the original site. Thanks!
5. ## Develop a Mac App Store version
The Safari extension cannot be found any more in the extensions gallery, to which I am redirected when trying to download Wikiwand on Safari (macOS High Sierra, 10.13.4).
It seems that new Safari extensions are available through the Mac App Store.
It would be great to have Wikiwand among those apps!
6. ## Dark Mode doesn't display formulas properly, it gets blurred into the black background.
Here is a screenshot of what I mean : http://i.imgur.com/A5lna50.png
I hope this can get fixed. Thank you.
7. ## Very simple way to fix dark mode equation display problem. Literally 10 character fix included.
If you are using the dark mode setting with Wikiwand, pages that contain complex mathematical portions become impossible to read because the math is rendered in black against the black background. This is super annoying, and after getting tired of having to work around this I realized a code addition of just 10 characters will fix the issue.
The problem arises because wikipedia renders complex math written in latex as SVG objects on the page. Wikiwand does nothing to these images, leaving them impossible to see.
The fix is to add filter:invert(90%) to the img style tag. That's all. It's that…
9. ## Toggle between Wikipedia and Wikiwand
provide a toggle switch for alternating between Wikiwand and Wikipedia
10. ## Lang="bg" for Bulgarian Wikipedia
I suggest you add the code lang="bg" to the Bulgarian section in Wikipedia. The code will turn on the localized Cyrillic serif font Lora for Bulgarian readers.
11. ## Restore functionality of the 'Back' button
Currently pressing Back tries to navigate to the Wikipedia article, and hence redirects back to the Wikiwand article. This has the effect of disabling the back button unless you right-click it and select the page before the Wikipedia article. Please look into fixing this (if it can be done).
12. ## Safari 12 has become more restrictive – Wikiwand does not work anymore
As of Safari 12 extension have to be approved by Apple and need to be downloaded from the App Store. Please make Wikiwand available on the App Store.
13. ## Put a sign that an article is featured or rated "good" or "A" by Wikipedia
Wikiwand's polish belies the bad shape the content of many articles on Wikipedia is in. One would like to know that an article is going to be as great as it looks in order to enjoy it the most.
14. ## Fix raw LaTeX in Titles in the Menu
Wikiwand displays the raw LaTeX when using LaTeX in titles. This also makes the Entry unclickable for some reason.
This can be seen here for example https://www.wikiwand.com/en/ControlledNOTgate
You can see it with the "Constructing the Bell State" entry. It shows "{\displaystyle |\Phi ^{+}\rangle }" instead of a Rendered version.
You can try clicking the entry. The URL changes but nothing happens (you don't jump to the section as usual).
15. ## add save to pocket and evernote in the sharing options
16. ## Release Wikiwand as a MediaWiki skin/extension?
17. ## Why does Wikiwand now require new permissions
I'm just curious as to why it needs these new permissions, one of which is to INSTALL OTHER EXTENSIONS.
18. ## Fix tables not scrolling horizontally.
Tables do not currently scroll properly when their width is larger than the window.
19. ## make your logo better (sorry).
The wikiwand logo looks like something from 2006. Great software design but that logo...
20. ## Please make the pictures in these "Overview boxes" clickable!
With this I mean the boxes like "periodic table" on each element site or the box on this site (http://www.wikiwand.com/de/KirchhoffscheRegeln ) or here (http://www.wikiwand.com/en/Capricorn(astrology) ).
The first one seems to be clickable but the periodic table is not visible... the second one is visible but not clickable.
(And sadly not on every page which it links to, but that is not a problem of Wikiwand, but if someone has more knowledge of Wikipedia than I do, I'd be glad if he/she would edit this template on every site where it links to, so you can easily go…
# Ex.5.2 Q20 Arithmetic Progressions Solution - NCERT Maths Class 10
Go back to 'Ex.5.2'
## Question
Ramkali saved $$\rm{Rs}\, 5$$ in the first week of a year and then increased her weekly saving by $$\rm{Rs}\, 1.75.$$ If in the $$n^\rm{th}$$ week, her weekly savings become $$\rm{Rs}\, 20.75,$$ find $$n.$$
## Text Solution
What is Known?
Savings in the first week are $$\rm{Rs}\, 5$$ and the weekly increment in savings is $$\rm{Rs}\, 1.75.$$
What is Unknown?
Week in which her savings become $$\rm{Rs}\, 20.75$$
Reasoning:
$${a_n} = a + \left( {n - 1} \right)d$$ is the general term of AP. Where $${a_n}$$ is the $$n^\rm{th}$$ term, $$a$$ is the first term, $$d$$ is the common difference and $$n$$ is the number of terms.
Steps:
From the given data, Ramkali’s savings in the consecutive weeks are
$$\rm{Rs}\, 5, \,\rm{Rs}\, (5+1.75),\, \rm{Rs}\, (5+2\times 1.75), \,\rm{Rs}\, (5+3\times 1.75)\, \dots$$ and so on
Hence the $$n^{\rm th}$$ week's savings are $$\rm{Rs}\, [5+(n-1)\times 1.75] = \rm{Rs}\, 20.75$$
Now from the above we know that
\begin{align} a&=5 \\ d&=1.75 \\{{a}_{n}}&=20.75 \\n&=?\end{align}
We know that the $$n^\rm{th}$$ term of an A.P. Series,
\begin{align}{a_n} &= a + (n - 1)d\\20.75 &= 5 + (n - 1)1.75\\15.75 &= (n - 1)1.75\\(n - 1) &= \frac{15.75}{1.75}\\n - 1 &= \frac{1575}{175}\\n - 1 &= 9\\n &= 10\end{align}

The answer is $$n = 10.$$
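A quick sanity check of the arithmetic (a throwaway snippet, not part of the textbook solution):

```python
# Verify that the 10th weekly saving in the AP 5, 6.75, 8.5, ... equals 20.75.
a, d = 5, 1.75
savings = [a + (n - 1) * d for n in range(1, 11)]
print(savings[-1])                  # 20.75
print(savings.index(20.75) + 1)     # n = 10
```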
## Saturday, October 4, 2014
### Tight binding DFT
Tight binding DFT (DFTB) is a semi-empirical method with speed and accuracy similar to NDDO-based semiempirical methods such as AM1, PM3, and PM6. Currently there are three types of DFTB methods, called DFTB1, DFTB2, and DFTB3. DFTB1 and DFTB2 are sometimes called non-SCC DFTB (non-selfconsistent charge) and SCC-DFTB, respectively. DFTB3 is generally considered the most accurate for molecules and there are several parameter sets for DFTB2 and DFTB3 for different elements. Compared to PM6, DFTB has so far been parameterized for relatively few elements.
DFTB1
The closed shell DFTB1 energy is computed from the following equation
$$E^{\text{DFTB1}}=\sum_i^{N/2} \sum_\mu^K \sum_\nu^K 2 C_{\mu i} C_{\nu i} H^0_{\mu\nu}+\sum_A \sum_{B>A} E^{\text{rep}}_{AB}$$
where $C_{\mu i}$ is the molecular orbital coefficient for MO $i$ and basis function $\mu$.
$$H^0_{\mu\nu}= \begin{cases} \varepsilon^{\text{free atom}}_{\mu\mu} & \text{if } \mu=\nu \\ 0 & \text{if } A=B, \mu\ne\nu\\ \langle \chi_\mu | \hat{T}+V_{\text{eff}}[\rho_0^A+\rho_0^B] | \chi_\nu \rangle & \text{if } A\ne B \end{cases}$$
Here, $\varepsilon^{\text{free atom}}_{\mu\mu}$ is an orbital energy of a free atom, $\chi$ is a valence Slater-type orbital (STO) or numerical orbital, $\hat{T}$ is the electronic kinetic energy operator, $V_{\text{eff}}$ is the Kohn-Sham potential (electron-nuclear attraction, electron-electron repulsion, and exchange correlation), and $\rho_0^A$ is the electron density of neutral atom $A$.
DFT calculations on free atoms using some functional yield $\left\{\varepsilon^{\text{free atom}}_{\mu\mu} \right\}$, $\left\{\chi\right\}$, and $\rho_0$, which are then used to compute $H^0_{\mu\nu}$ for A-B atom pairs at various separations $R_{AB}$ and stored. When performing DFTB calculations $H^0_{\mu\nu}$ is simply computed for each atom pair A-B by interpolation using this precomputed data set.
Similarly, the overlap matrix elements $\left\{ \langle \chi_\mu | \chi_\nu \rangle \right\}$ needed to orthonormalize the MOs are computed for various distances and stored for future use.
$E^{\text{rep}}_{AB}$ is an empirical repulsive pairwise atom-atom potential with parameters adjusted to minimize the difference in atomization energies, geometries, and vibrational frequencies computed using DFTB and DFT or electronic structure calculations for a set of molecules.
So, a DFTB1 calculation is performed by constructing $\mathbf{H}^0$, diagonalizing it to yield $\mathbf{C}$, and then computing $E^{\text{DFTB1}}$.
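This procedure can be sketched in a few lines of Python; the toy $\mathbf{H}^0$, $\mathbf{S}$, and repulsion below are placeholders standing in for the interpolated pair tables, not real DFTB parameters:

```python
# Toy sketch of a DFTB1 energy evaluation for two basis functions on two atoms.
import numpy as np
from scipy.linalg import eigh

H0 = np.array([[-0.5, -0.3],
               [-0.3, -0.4]])       # precomputed H0 (placeholder values)
S  = np.array([[1.0, 0.2],
               [0.2, 1.0]])         # precomputed overlap (placeholder values)
E_rep = 0.05                        # summed pairwise repulsion (placeholder)
n_electrons = 2                     # closed shell -> one doubly occupied MO

eps, C = eigh(H0, S)                # generalized eigenproblem H0 C = S C eps
n_occ = n_electrons // 2
P = 2.0 * C[:, :n_occ] @ C[:, :n_occ].T      # density matrix
E_band = np.sum(P * H0)             # = sum_i sum_mu,nu 2 C_mu_i C_nu_i H0_mu_nu
E_DFTB1 = E_band + E_rep
print(E_DFTB1)
```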
DFTB2
$$E^{\text{DFTB2}}=E^{\text{DFTB1}}+\sum_A \sum_{B>A} \gamma_{AB}(R_{AB})\Delta q_A\Delta q_B$$
where $\Delta q_A$ is the Mulliken charge on atom $A$ and $\gamma_{AB}$ is a function of $R_{AB}$ that tends to $1/R_{AB}$ at long distances.
The Mulliken charges depend on $\mathbf{C}$ so a selfconsistent calculation is required (steps 1-4 below; a toy code sketch follows the list):
1. Compute DFTB1 MO coefficients, $\mathbf{C}$
2. Use $\mathbf{C}$ to compute $\left\{ \Delta q \right\}$
3. Construct and diagonalize $H_{\mu \nu}$ to get new MO coefficients, $\mathbf{C}$
$$H_{\mu \nu}=H_{\mu \nu}^0 + \frac{1}{2} S_{\mu\nu} \sum_C (\gamma_{AC}+\gamma_{BC})\Delta q_C, \mu \in A, \nu \in B$$
4. Repeat steps 2 and 3 until selfconsistency.
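A toy version of this loop, continuing the placeholder two-atom system from the DFTB1 sketch above (again, none of the numbers are real DFTB parameters, and the sign convention used for $\Delta q$ is Mulliken population minus neutral-atom electrons):

```python
# Minimal sketch of the DFTB2 self-consistent charge (SCC) loop.
import numpy as np
from scipy.linalg import eigh

H0 = np.array([[-0.5, -0.3], [-0.3, -0.4]])
S  = np.array([[1.0, 0.2], [0.2, 1.0]])
gamma = np.array([[0.4, 0.2], [0.2, 0.4]])   # gamma_AB(R_AB), placeholder values
z = np.array([1.0, 1.0])                     # electrons on the neutral free atoms
atom_of = np.array([0, 1])                   # which atom each basis function sits on
n_occ = 1                                    # one doubly occupied MO

dq = np.zeros(2)                             # step 1: DFTB1 guess (all Delta q = 0)
for iteration in range(100):
    # step 3: build the charge-dependent Hamiltonian and diagonalize it
    shift = gamma @ dq                       # sum_C gamma_AC * Delta q_C per atom A
    H = H0 + 0.5 * S * (shift[atom_of][:, None] + shift[atom_of][None, :])
    eps, C = eigh(H, S)
    P = 2.0 * C[:, :n_occ] @ C[:, :n_occ].T
    # step 2: Mulliken populations and charge fluctuations from the new MOs
    mulliken = np.array([(P @ S).diagonal()[atom_of == A].sum() for A in range(2)])
    dq_new = mulliken - z
    if np.max(np.abs(dq_new - dq)) < 1e-10:  # step 4: check selfconsistency
        break
    dq = 0.5 * dq + 0.5 * dq_new             # simple damping to aid convergence
print(dq)
```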
DFTB3
$$E^{\text{DFTB3}}=E^{\text{DFTB2}}+\frac{1}{3}\sum_A \sum_{B} \Gamma_{AB}\Delta q_A^2\Delta q_B$$
$\Gamma_{AB}$ is computed by interpolation from precomputed data. An SCF calculation is required.
Parameter sets and availability
DFTB is available in a variety of software packages; I don't believe DFTB3 is currently in Gaussian, and DFTB is also available in CHARMM and CP2K. DFTB will soon be available in GAMESS.
Note that each user/lab must download the parameter file separately here. There are several parameter sets. The most popular sets for molecules are MIO (materials and biological systems) for DFTB2 and 3OB (organic and biological applications) for DFTB3.
Dispersion and hydrogen bond corrections
Just like DFT and PM6, DFTB can be corrected for dispersion and hydrogen-bonding effects.
## Wednesday, October 1, 2014
### Computational Chemistry Highlights: September issue
The September issue of Computational Chemistry Highlights is out.
CCH is an overlay journal that identifies the most important papers in computational and theoretical chemistry published in the last 1-2 years. CCH is not affiliated with any publisher: it is a free resource run by scientists for scientists. You can read more about it here.
The table of contents for this issue features contributions from CCH editors Steven Bachrach and Jan Jensen:
Why Bistetracenes Are Much Less Reactive Than Pentacenes in Diels–Alder Reactions with Fullerenes
# Continuity of cylindrical functions.
Let $C_c^\infty(\mathbb R^d)$ be the smooth functions from $\mathbb R^d$ to $\mathbb R$ with compact support; further, let $X$ be a separable Hilbert space with a fixed orthonormal basis $(e_n)_n$. Define the cylindrical functions:
\begin{align*} \text{Cyl}(X) := \{f : X \to &\mathbb R : \text{there exists } d \in \mathbb{N} \text{ and } \phi \in C_c^\infty(\mathbb R^d) \text{ such that }\\ &f(x) = \phi(\langle x, e_1 \rangle, \ldots , \langle x, e_d \rangle ) \text{ for all } x \in X \} \end{align*}
Now, if $\langle . , . \rangle$ is the inner product on $X$ define $$\langle x, y \rangle_\omega := \sum_n \frac{1}{n^2} \langle x, e_n \rangle \langle e_n, y \rangle$$
Now, why is every $f \in \text{Cyl}(X)$ continuous with respect to $\langle . , . \rangle_\omega$? Sure, it is Lipschitz and continuous with respect to the weak topology (because it is with respect to the strong topology). Further I know that on bounded sets, the topology induced by $\langle . , . \rangle_\omega$ is the same as the weak topology. However, $f^{-1}(A)$ does not have to be bounded. What am I missing, I'm sure it is easy.
I've usually seen cylinder functions defined by $f(x) = \phi(\langle x, u_1 \rangle, \dots, \langle x, u_d \rangle)$ where $u_1, \dots, u_d$ may be any elements of $X$, not just elements of the chosen orthonormal basis. My class of functions seems to be strictly larger than yours... – Nate Eldredge Oct 27 '10 at 22:56
@Nate: Gradient Flows in Metric Spaces and in the Space of Probability Measures by Ambrosio et al uses the definition I give (pg 113). Do you have a reference where I can find your version? – Jonas Teuwen Oct 27 '10 at 23:39
For each $j$, the inequality $$|\langle x , e_j\rangle| = j \sqrt{\langle x , e_j\rangle\langle e_j , x\rangle\over j^2}\leq j\sqrt{\langle x , x\rangle_\omega}$$ shows that the map $x\mapsto \langle x , e_j\rangle$ is continuous from $(X , \langle\cdot ,\cdot\rangle_\omega)$ to $\mathbb R$. Thus, for fixed $d$ the map $x\mapsto (\langle x , e_1\rangle, \dots,\langle x , e_d\rangle)$ is continuous from $(X , \langle\cdot ,\cdot\rangle_\omega)$ into ${\mathbb R}^d$. Composing with the smooth map $\phi:{\mathbb R}^d\to {\mathbb R}$ gives you a continuous function from $(X , \langle\cdot ,\cdot\rangle_\omega)$ into $\mathbb R$ again.
You're welcome. When I was a student, I briefly thought that the $\langle \cdot , \cdot\rangle_\omega$ topology was the same as the weak topology. I'm glad to see that you didn't fall into that trap! – Byron Schmuland Oct 27 '10 at 21:21
# Did Deborah Mayo refute Birnbaum's proof of the likelihood principle?
This is somewhat related to my previous question here: An example where the likelihood principle *really* matters?
Apparently, Deborah Mayo published a paper in Statistical Science refuting Birnbaum's proof of the likelihood principle. Can anyone explain the main argument by Birnbaum and the counter-argument by Mayo? Is she right (logically)?
In a nutshell, Birnbaum's argument is that two widely accepted principles logically imply that the likelihood principle must hold. The counter-argument of Mayo is that the proof is wrong because Birnbaum misuses one of the principles.
Below I simplify the arguments to the extent that they are not very rigorous. My purpose is to make them accessible to a wider audience because the original arguments are very technical. Interested readers should see the detail in the articles linked in the question and in the comments.
For the sake of concreteness, I will focus on the case of a coin with unknown bias $$\theta$$. In experiment $$E_1$$ we flip it 10 times. In experiment $$E_2$$ we flip it until we obtain 3 "tails". In experiment $$E_{mix}$$ we flip a fair coin with labels "1" and "2" on either side: if it lands a "1" we perform $$E_1$$; if it lands a "2" we perform $$E_2$$. This example will greatly simplify the discussion and will exhibit the logic of the arguments (the original proofs are of course more general).
The principles:
The following two principles are widely accepted:
The Weak Conditionality Principle says that we should draw the same conclusions if we decide to perform experiment $$E_1$$, or if we decide to perform $$E_{mix}$$ and the coin lands "1".
The Sufficiency Principle says that we should draw the same conclusions in two experiments where a sufficient statistic has the same value.
The following principle is accepted by the Bayesian but not by the frequentists. Yet, Birnbaum claims that it is a logical consequence of the first two.
The Likelihood Principle says that we should draw the same conclusions in two experiments where the likelihood functions are proportional.
Birnbaum's theorem:
Say we perform $$E_1$$ and we obtain 7 "heads" out of ten flips. The likelihood function of $$\theta$$ is $${10 \choose 3}\theta^7(1-\theta)^3$$. We perform $$E_2$$ and need to flip the coin 10 times to obtain 3 "tails". The likelihood function of $$\theta$$ is $${9 \choose 7}\theta^7(1-\theta)^3$$. The two likelihood functions are proportional.
Birnbaum considers the following statistic on $$E_{mix}$$ from $$\{1, 2\} \times \mathbb{N}^2$$ to $$\{1, 2\} \times \mathbb{N}^2$$: $$T: (\xi, x,y) \rightarrow (1, x,y),$$ where $$x$$ and $$y$$ are the numbers of "heads" and "tails", respectively. So no matter what happens, $$T$$ reports the result as if it came from experiment $$E_1$$. It turns out that $$T$$ is sufficient for $$\theta$$ in $$E_{mix}$$. The only case that is non-trivial is when $$x = 7$$ and $$y = 3$$, where we have
$$P(X_{mix}=(1,x,y)|T=(1,x,y)) = \frac{0.5 \times {10 \choose 3}\theta^7(1-\theta)^3}{0.5 \times {10 \choose 3}\theta^7(1-\theta)^3 + 0.5 \times {9 \choose 7}\theta^7(1-\theta)^3}\\=\frac{{10 \choose 3}}{{10 \choose 3}+{9 \choose 7}}\text{, a value that is independent of } \theta.$$ All the other cases are 0 or 1—except $$P(X_{mix}=(2,x,y)|T=(1,x,y))$$, which is the complement of the probability above. The distribution of $$X_{mix}$$ given $$T$$ is independent of $$\theta$$, so $$T$$ is a sufficient statistic for $$\theta$$.
Now, according to the sufficiency principle, we must conclude the same for $$(1,x,y)$$ and $$(2,x,y)$$ in $$E_{mix}$$, and from the weak condionality principle, we must conclude the same for $$(x,y)$$ in $$E_1$$ and $$(1,x,y)$$ in $$E_{mix}$$, as well as for $$(x,y)$$ in $$E_2$$ and $$(2,x,y)$$ in $$E_{mix}$$. So our conclusion must be the same in all cases, which is the likelihood principle.
Mayo's counter-proof:
The setup of Birnbaum is not a mixture experiment because the result of the coin labelled "1" and "2" was not observed, therefore the weak conditionality principle does not apply to this case.
Take the test $$\theta = 0.5$$ versus $$\theta > 0.5$$ and draw a conclusion from the p-value of the test. As a preliminary observation, note that the p-value of $$(7,3)$$ in $$E_1$$ is given by the binomial distribution as approximately $$0.1719$$; the p-value of $$(7,3)$$ in $$E_2$$ is given by the negative binomial distribution as approximately $$0.0898$$.
Here comes the important part: the p-value of $$T=(1,7,3)$$ in $$E_{mix}$$ is given as the average of the two—remember we do not know the status of the coin—i.e. approximately $$0.1309$$. Yet the p-value of $$(1,7,3)$$ in $$E_{mix}$$—where the coin is observed—is the same as that in $$E_1$$, i.e. approximately $$0.1719$$. The weak conditionality principle holds (the conclusion is the same in $$E_1$$ and in $$E_{mix}$$ where the coin lands "1") and yet the likelihood principle does not. The counter-example disproves Birnbaum's theorem.
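The three p-values above are easy to check numerically; a small script (using scipy's convention that the negative binomial counts the heads observed before the 3rd tail) reproduces them:

```python
# Verify the p-values for H0: theta = 0.5 used in Mayo's counter-example.
from scipy.stats import binom, nbinom

p1 = binom.sf(6, 10, 0.5)     # E1: P(at least 7 heads in 10 flips) ~ 0.1719
p2 = nbinom.sf(6, 3, 0.5)     # E2: P(at least 7 heads before the 3rd tail) ~ 0.0898
p_mix = 0.5 * (p1 + p2)       # E_mix with the coin unobserved ~ 0.1309
print(p1, p2, p_mix)
```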
Peña and Berger's refutation of Mayo's counter-proof:
Mayo implicitly changed the statement of the sufficiency principle: she interprets "same conclusions" as "same method". Taking the p-value is an inference method, but not a conclusion. This is important because an agent can come to identical conclusions even when two p-values are different. This is not meant in the sense that you accept the null hypothesis if the p-value is 0.8 or 0.9, but in the sense that the two p-values of Mayo are computed from different experiments (different probability spaces with different outcomes), so with this information at hand you can draw the same conclusion even if the values are different.
The sufficiency principle says that if there exists a sufficient statistic, then the conclusions must be the same, but it does not require the sufficient statistic to be used at all. If it did, it would lead to a contradiction, as demonstrated by Mayo.
• As a side note, one may question the value of founding principles if nobody can really tell when and how they apply. I wonder why the axiomatic method works well for probability but not so much for the theory of statistics. – gui11aume Apr 22 '19 at 9:14
• The discussion relies crucially on how the weak conditionality principle and the sufficiency principle are exactly formulated and applied in the proof, which isn't quite clear in the posting. I don't want to criticise the poster, as it's very difficult to elaborate these things completely, and what is written isn't wrong. But logically it can't be clear to the reader from this whether Mayo is right, or whether Pena and Berger are right - and actually it depends on the precise formulation and there are possibilities to do this that would make right the claim of either side. – Lewian Jan 9 at 14:19
• @Lewian thanks for your comment. I agree with you, the detail matters, which makes it hard to simplify the ideas (someone with the skills could just look at the original articles). From what I gathered, it's not clear to anyone who is right: those who hate the likelihood principle seem to agree with Mayo, those who like it seem to disagree with her. Anyway, if you have ideas of how to improve the post, please go ahead, I won't be offended. Quite the reverse, I wish to make it as useful as possible, but I am far from an expert on this topic. – gui11aume Jan 10 at 16:08
• Actually I believe I have a pretty good understanding of this, and I have discussed it with various people including D. Mayo herself and (from the other side) Phil Dawid, although both of them would think the other one is wrong and wouldn't therefore agree with my defense of what the other one is saying. Unfortunately it is complicated enough that I don't think it's realistic that I can find the time to properly elaborate it for posting it here any time soon, which of course means that nobody needs to believe what I claim... – Lewian Jan 10 at 16:13
• @Lewian that's amazing! Would you at least have the time to briefly sketch the arguments and point to existing references? If so, you could send it to me (easy to find the contact details from my profile) and perhaps I could improve the answer. I cannot do this soon either, but one day I could come back to it... – gui11aume Jan 10 at 17:07
# Limits & Riemann Sum
## Homework Statement
Question regarding #16
Riemann Sum
## The Attempt at a Solution
I know that the limit of the Riemann sum is basically the integral. However, I do not know where to go from there. Do I need to use the summation formulas? Thanks
## Answers and Replies
rbj
can you write down the expression for a Riemann integral. something like:
$$\int_a^b f(x) dx = \lim_{N \to \infty} \sum_{n=1}^N ???$$
what goes in the question marks?
also, even though Riemann doesn't, assume things in the ??? are equally spaced. that's usually good enough.
Last edited:
can you write down the expression for a Riemann integral. something like:
$$\int_a^b f(x) dx = \lim_{N \to \infty} \sum_{n=1}^N ???$$
what goes in the question marks?
also, even though Riemann doesn't, assume things in the ??? are equally spaced. that's usually good enough.
Well, here's what I have so far:
$$\int_a^b f(x) dx = \lim_{N \to \infty} \sum_{n=1}^N \frac{1}{N}\sin\left(\frac{\pi n}{N}\right)$$
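As a quick numerical sanity check, assuming the intended sum is (1/N)·sin(πn/N) summed from n = 1 to N (a guess based on the attempt above), the following sketch shows it approaching ∫₀¹ sin(πx) dx = 2/π:

import math

def riemann_sum(N):
    # right-endpoint Riemann sum of sin(pi*x) on [0, 1] with N equal pieces:
    # sum_{n=1}^{N} (1/N) * sin(pi*n/N)
    return sum(math.sin(math.pi * n / N) / N for n in range(1, N + 1))

exact = 2 / math.pi   # integral of sin(pi*x) from 0 to 1
for N in (10, 100, 1000, 10000):
    s = riemann_sum(N)
    print(N, s, abs(s - exact))

The error shrinks as N grows, which is exactly the statement that the limit of the Riemann sum equals the integral.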
HallsofIvy |
# On Integration
1. Feb 22, 2010
I am having trouble figuring out how an integration works.
Consider the function y = x^2. The antiderivative is x^3/3, or in other words x^2 · (x/3). How come multiplying the original function by (x/n) (where n represents the exponent + 1) leads to the area under the curve?
Put another way, let's define the above in terms of a graph. Consider the function y = x^2 and suppose you want to find the area from 0 to 1.
All you are doing to find the area is x^2 · (x/3). You are creating a rectangle whose dimensions are x^2 by (x/3). If the function we were dealing with were x^3, the rectangle that would yield the area under the curve would be x^3 by (x/4).
http://img237.imageshack.us/img237/6906/image1sj.png [Broken]
How come the above rectangle just happened to be the total area under the curve? How did this work out and why does it work out?
In order to try and figure this out, I searched the history of calculus and found this from wiki, but I didn't really understand it:
In this paper, Newton determined the area under a curve by first calculating a momentary rate of change and then extrapolating the total area. He began by reasoning about an indefinitely small triangle whose area is a function of x and y. He then reasoned that the infinitesimal increase in the abscissa will create a new formula where x = x + o (importantly, o is the letter, not the digit 0). He then recalculated the area with the aid of the binomial theorem, removed all quantities containing the letter o and re-formed an algebraic expression for the area. Significantly, Newton would then “blot out” the quantities containing o because terms “multiplied by it will be nothing in respect to the rest”.
Last edited by a moderator: May 4, 2017
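As a quick numerical sketch of the observation above, the following checks that the area under y = t^2 from 0 to x matches the rectangle x^2 · (x/3); the step count is arbitrary:

def area_under_square(x, steps=100000):
    # midpoint-rule approximation of the area under y = t**2 from 0 to x
    dt = x / steps
    return sum(((i + 0.5) * dt) ** 2 * dt for i in range(steps))

for x in (0.5, 1.0, 2.0):
    print(x, area_under_square(x), x**2 * (x / 3))

The two columns agree to several decimal places, which is just the fundamental theorem of calculus written as a rectangle.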
2. Feb 22, 2010
### Werg22
If you read any standard calculus book, you'll see how formulas for areas under such curves are derived, and why they are true.
The equality of the area of the region you have in mind and the area of that rectangle happens to be a "fun fact". I don't think there's a way to show this geometrically; it's just something that we can observe from the formula we get for the area under the curve x^n.
Also, the area of the rectangle is not the total area under the curve, it's the area under the curve from 0 to x.
3. Feb 23, 2010
Do you have any specific books in mind?
Sorry, that's what I meant.
I am just having trouble understanding why multiplying the original function by (x/n) can lead to the area under the curve.
4. Feb 23, 2010
### l'Hôpital
It's a coincidence. Kinda like how the sum of the first n odd numbers is n^2 or how
$$1^3 + 2^3 + 3^3 + \dots + n^3 = (1 + 2 + 3 + \dots + n)^2$$
We can prove these formulas are true, but the fact that they are true are very neat coincidences.
5. Feb 24, 2010 |
# Useless math that became useful
I'm writing an article on Lychrel numbers and some people pointed out that this is completely useless.
My idea is to amend my article with some theories that seemed useless when they were created but found use after some time.
I came up with some ideas like the Turing machine but I think I'm not grasping the right examples.
Can someone point me to some theories that seemed like the Lychrel numbers and then became 'useful'?
Edit: As some people pointed out that I've published this on MSE I present a code here to find some candidates as Lychrel numbers.
def reverseNum(n):
    # reverse the decimal digits of n
    st = str(n)
    return int("".join([st[i] for i in xrange(len(st) - 1, -1, -1)]))

def isPalindrome(n):
    st = str(n)
    rev = str(reverseNum(st))
    return st == rev

def isLychrel(n, num_iterations):
    # return the number of reverse-and-add steps needed to reach a palindrome,
    # or -1 if none is found within num_iterations steps (a Lychrel candidate)
    p = n
    for i in xrange(num_iterations):
        if isPalindrome(p):
            return i
        p = p + reverseNum(p)
    return -1

for i in xrange(1000):
    p = isLychrel(i, 100)
    if (p < 0):
        print i, p
What about math that was once useful but now useless? Like all of the tricks engineers had to use to multiply using slide rules... – Brian Rushton Dec 17 '12 at 19:12
This sort of appendix seems contrary to the nature of mathematics. The argument isn't countered by providing a list of other ideas that people might have said were useless. Instead, why not focus on the education aspects? According to the Wikipedia article, the search has led a few computer programmers into what is ostensibly number theory, and may have introduced many young people to a fundamental idea behind proofs - even if you haven't found a palindrome by $10^9$, there might still be one. Sounds a lot like Skewes' number, also probably called useless. – Zack Wolske Dec 17 '12 at 19:13

I think that "usefulness" is probably not the correct measure for mathematics. Other properties, such as beautiful results, the occurrence of complicated structure, or the use of unexpected techniques are also good ways of judging math. – André Henriques Dec 17 '12 at 21:39

It is interesting to look for things that turned out to be much more useful than initially thought, but I think you ought to look for reasons that you know for studying Lychrel numbers instead of hoping that more will come in the future. It seems to me like the primary motivation is that this is a simple question that seems like it should be easy to answer, but apparently isn't, so by searching for the answer, we may come to understand the integers better. – Miles Dec 18 '12 at 7:05

I agree with Zack. If you don't see a way of arguing directly for the usefulness of Lychrel numbers research (arguing for the usefulness of other allegedly useless results in mathematics is really no argument at all for the case in question), then don't go into it. Focus on other types of payoff. – Todd Trimble Jan 15 at 13:18

## 7 Answers

Number theory, in particular investigations related to prime numbers, was famously considered as useless (cf Hardy) for practical matters. Now, since "everybody" needs some cryptography it is quite useful to know how to generate primes (eg, for an RSA key) and alike, sometimes involving prior 'useless' number theory results.

This answer being given, let me add that I am not convinced your idea regarding the article is a good one. – quid Dec 17 '12 at 18:20

Number theory may have been regarded as "useless" in this sense, but was it ever considered to be as useless as Lychrel numbers? – Franz Lemmermeyer Dec 17 '12 at 18:23

It is also true that in a perfect world --say, a world ruled by mathematicians, cryptography itself would be useless: why to hide things? – Pietro Majer Dec 17 '12 at 19:08

Pet peeve: "cf" stands for "conferre", which means "to compare"; you are using it as a reference or a "see for example". Though an extremely common usage, it is incorrect. "cf" should be used for "compare with", and you don't want to compare the writings of Hardy with the statement that Number Theory was considered useless; rather, you want to use Hardy's writings as a reference to the assertion that Number Theory was considered useless... – Arturo Magidin Dec 17 '12 at 22:20

@Arturo Magidin: yes, the usage is a bit odd here; also but perhaps not only as I changed the phrase after the reference was typed. I might sometimes use it strangely or even in a wrong way, but I can assure you I do know the meaning. Incidentally, since we are discussing such matters, it seems to me it actually does not stand for (the infinitive) 'conferre.' :) – quid Dec 17 '12 at 23:54

The Radon transform, when introduced by Johann Radon in 1917, was useless, until Cormack and Hounsfield developed Tomography in the 60's (Nobel prize for medicine 1979).

The most famous example is conic sections. Conic sections were of great interest to Greek mathematicians, and their theory was highly developed in the 2-nd century BC. However I don't know of any application until Kepler's discovery that the celestial bodies move on conic sections. Thus 18 centuries passed between math research and the first application!

This actually seems to be a non-example. Conic sections were apparently first studied by Menaechmus in the 4th century BCE. We're not sure what his motivation was, but he definitely used them in his method of doubling the cube. Some speculate that this problem led him to discover conics; others suggest that he was prompted by the fact that the tip of a sundial traces a hyperbola on any given day (outside the Arctic circles, anyway). In any case, it looks like conic sections had applications as soon as people knew about them. – Miles Dec 18 '12 at 6:57

Thanks for this information. Could you cite some source of this information? Doubling the cube is not a serious application, and conics certainly do not help here, but the shade path on the sundial seems to be an application. Still hard to imagine that this application justified the treatise of Apollonius... – Alexandre Eremenko Dec 18 '12 at 14:51

en.wikipedia.org/wiki/Menaechmus (cites Boyer's and Cooke's history of math texts) www-history.mcs.st-andrews.ac.uk/Biographies/Menaechmus.html www-history.mcs.st-and.ac.uk/HistTopics/Sundials.html W.W. Dolan: Early Sundials and the Discovery of the Conic Sections, Mathematics Magazine, 1972. 45(1): p. 8-12. – Miles Dec 18 '12 at 16:02

Sorry about the formatting there---I tried to make it nice . . . – Miles Dec 18 '12 at 16:08

Conic sections were apparently used by the Greeks (possibly Archimedes) in real life: en.wikipedia.org/wiki/Parabolic_reflector – YangMills Jan 16 at 0:53

Divergent series, anyone? It was devil's work, just a curiosity, an unorthodox idea for Euler and a strange concept for Abel, Ramanujan (Abel claiming that it can't and mustn't be used for serious calculations)... but today, we use it for "real" things.

P. S. How about CWing the question and tagging it big list? – Harun Šiljak Dec 17 '12 at 18:11

Useless concept for Euler? Is my sarcasm detector broken? – Franz Lemmermeyer Dec 17 '12 at 18:21

Done. Interesting enough, our both answers are related to Hardy, champion of "mathematics without practical use" (or at least what he hoped to be without practical use). – Harun Šiljak Dec 17 '12 at 18:22

@Harun: that's better. Even if the idea was very orthodox for Euler, who defended using divergent series at each and every opportunity. It wasn't orthodox for the Bernoullis, however. – Franz Lemmermeyer Dec 17 '12 at 18:29

The more I think about it, the more I think that divergent series are rather useful math that became useless than the other way round. Nevertheless I like your answer better than the question. – Franz Lemmermeyer Dec 17 '12 at 21:17

Fast Fourier transform: Originally developed by Gauss in the early 19th century. Gauss thought it unworthy of publication, because there were better computational techniques. It only appeared in his collected works after his death, where nobody noticed it. Rediscovered by Cooley and Tukey, and instantly recognized as important. See e.g. http://www.math.ethz.ch/education/bachelor/seminars/fs2008/nas/woerner.pdf

Negative numbers and complex numbers were regarded as absurd and useless by many mathematicians prior to the $15^{th}$ century. For instance, Chuquet referred to negative numbers as "absurd numbers." Michael Stifel has a chapter on negative numbers in his book "Arithmetica integra" titled "numeri absurdi". And so too were complex/imaginary numbers. Gerolamo Cardano in his book "Ars Magna" calls the square root of negative numbers a completely useless object.
I guess the same attitude towards Quaternions and Octonions would have been prevalent, when they were initially discovered.
Getting frequencies out of a FFT
I use this FFT and wrote a short program to test it.
package test_FFT;

import org.apache.commons.math3.transform.FastFourierTransformer;
import org.apache.commons.math3.transform.DftNormalization;
import org.apache.commons.math3.transform.TransformType;

public class FFT2 {
    public static void main(String[] args) {
        double[][] array = new double[2][8];
        // An array of complex numbers with real part array[0][*]
        // and complex part array[1][*]
        // The output of the transformation will be saved in the input array
        array[0][0] = 5.0;
        array[0][1] = 2.0;
        array[0][2] = 3.0;
        array[0][3] = 4.0;
        array[0][4] = 5.0;
        array[0][5] = 6.0;
        array[0][6] = 7.0;
        array[0][7] = 9.0;
        FastFourierTransformer.transformInPlace(array, DftNormalization.STANDARD, TransformType.FORWARD);
        for (int i = 0; i < 8; i++) {
            System.out.println("real " + array[0][i]);
            System.out.println("complex " + array[1][i]);
        }
    }
}
In the output array I get amplitudes of sin and cos functions. The information about the frequencies should depend on the position within the output array. After some research on this page, I still don't understand how to calculate frequencies out of array positions.
I learned that there are many flavours of how to perform an FFT. Do any of you have detailed knowledge of how to calculate frequencies from the output of the FFT I use? A code sample computing frequencies for the output of the example above would be greatly appreciated. Thank you
• I guess this should answer your question. – Matt L. Apr 1 '16 at 16:28
The magnitude is quite easy: it is the absolute value of the complex entry, $\sqrt{real^2 + imag^2}$. The phase is $\phi = \arctan \frac{imag}{real}$.
Because of its symmetry it is common to show only half of the frequency array. If you plot $array[0]$ to $array[N/2]$ you should get your frequency array from $\frac{1}{N}$ to $\frac{F_s}{2}$.
• should this be from $0Hz$ to $\frac{Fs}{2}$ ? – gerrgheiser Apr 1 '16 at 19:35 |
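For illustration, here is the same bin-to-frequency mapping in Python/NumPy instead of Java (the sampling rate of 8 Hz is an assumption, since the post does not state one); bin k of an N-point FFT corresponds to k·Fs/N Hz:

import numpy as np

fs = 8.0                        # sampling rate in Hz (assumed for illustration)
x = np.array([5.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 9.0])  # same samples as above
N = len(x)

X = np.fft.fft(x)               # complex spectrum (forward DFT)
freqs = np.arange(N) * fs / N   # frequency of bin k is k*fs/N

for k in range(N // 2 + 1):     # bins above N/2 mirror the ones below for real input
    print("bin", k, ":", freqs[k], "Hz, |X| =", abs(X[k]), ", phase =", np.angle(X[k]))

Bin 0 is the DC component (0 Hz) and bin N/2 sits at Fs/2, so for real-valued input only the first N/2 + 1 bins carry independent information.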
# List of all the Questions
Question:
Mark against the correct answer in each of the following:
If the plane $2 x-y+z=0$ is parallel to the line $\frac{2 x-1}{2}=\frac{2-y}{2}=\frac{z+1}{a}$, then the value of $a$ is
A. $-4$
B. $-2$
C. 4
D. 2
Question:
Mark against the correct answer in each of the following:
The equation of the plane passing through the points $A(0,-1,0), B(2,1,-1)$ and $C(1,1,1)$ is given by
A. $4 x+3 y-2 z-3=0$
B. $4 x-3 y+2 z+3=0$
C. $4 x-3 y+2 z-3=0$
D. None of these
Question:
Mark against the correct answer in each of the following:
The equation of the plane passing through the intersection of the planes $3 x-y+2 z-4=0$ and $x+y+z-2=0$ and passing through the point $A(2,2,1)$ is given by
A. $7 x+5 y-4 z-8=0$
B. $7 x-5 y+4 z-8=0$
C. $5 x-7 y+4 z-8=0$
D. $5 x+7 y-4 z+8=0$
Question:
Mark against the correct answer in each of the following:
The equation of the plane passing through the points $\mathrm{A}(2,2,1)$ and $\mathrm{B}(9,3,6)$ and perpendicular to the plane $2 x+6 y+6 z=1$, is
A. $x+2 y-3 z+5=0$
B. $2 x-3 y+4 z-6=0$
C. $4 x+5 y-6 z+3=0$
D. $3 x+4 y-5 z-9=0$
Question:
Mark against the correct answer in each of the following:
The line $\frac{x-1}{2}=\frac{y-2}{4}=\frac{z-3}{-3}$ meets the plane $2 x+3 y-z=14$ in the point
A. $(2,5,7)$
B. $(3,5,7)$
C. $(5,7,3)$
D. $(6,5,3)$
Question:
Mark against the correct answer in each of the following:
The equation of a plane through the point $A(1,0,-1)$ and perpendicular to the line $\frac{x+1}{2}=\frac{y+3}{4}=\frac{z+7}{-3}$ is
A. $2 x+4 y-3 z=3$
B. $2 x-4 y+3 z=5$
C. $2 x+4 y-3 z=5$
D. $x+3 y+7 z=-6$
Question:
Mark against the correct answer in each of the following:
If a plane meets the coordinate axes in $A, B$ and $C$ such that the centroid of $\triangle A B C$ is $(1,2,4)$, then the equation of the plane is
A. $x+2 y+4 z=6$
B. $4 x+2 y+z=12$
C. $x+2 y+4 z=7$
D. $4 x+2 y+z=7$
Question:
Mark against the correct answer in each of the following:
The plane $2 x+3 y+4 z=12$ meets the coordinate axes in $A, B$ and $C$. The centroid of $\triangle A B C$ is
A. $(2,3,4)$
B. $(6,4,3)$
C. $\left(2, \frac{4}{3}, 1\right)$
D. None of these
Question:
Mark against the correct answer in each of the following:
If the line $\frac{x-4}{1}=\frac{y-2}{1}=\frac{z-k}{2}$ lies in the plane $2 x-4 y+z=7$, then the value of $k$ is
A. $-7$
B. 7
C. 4
D. $-4$
Question:
Mark against the correct answer in each of the following:
If $O$ is the origin and $P(1,2,-3)$ is a given point, then the equation of the plane through $P$ and perpendicular to OP is
A. $x+2 y-3 z=14$
B. $x-2 y+3 z=12$
C. $x-2 y-3 z=14$
D. None of these
Question:
Mark against the correct answer in each of the following:
If the line $\frac{x+1}{3}=\frac{y-2}{4}=\frac{z+6}{5}$ is parallel to the plane $2 x-3 y+k z=0$, then the value of $k$ is
A. $\frac{5}{6}$
B. $\frac{6}{5}$
C. $\frac{3}{4}$
D. $\frac{4}{5}$
Question:
Mark against the correct answer in each of the following:
A plane cuts off intercepts $3,-4,6$ on the coordinate axes. The length of perpendicular from the origin to this plane is
A. $\frac{5}{\sqrt{29}}$ units
B. $\frac{8}{\sqrt{29}}$ units
C. $\frac{6}{\sqrt{29}}$ units
D. $\frac{12}{\sqrt{29}}$ units
Question:
Mark against the correct answer in each of the following:
The equation of a plane passing through the point $A(2,-3,7)$ and making equal intercepts on the axes, is
A. $x+y+z=3$
B. $x+y+z=6$
C. $x+y+z=9$
D. $x+y+z=4$
Question:
Mark against the correct answer in each of the following:
The length of perpendicular from the origin to the plane $\overrightarrow{\mathrm{r}} \cdot(3 \hat{\mathrm{i}}-4 \hat{\mathrm{j}}-12 \hat{\mathrm{k}})+39=0$ is
A. 3 units
B. $\frac{13}{5}$ units
C. $\frac{5}{3}$ units
D. None of these
Question:
Mark against the correct answer in each of the following:
The direction cosines of the normal to the plane $5 y+4=0$ are
A. $0, \frac{-4}{5}, 0$
B. $0,1,0$
C. $0,-1,0$
D. None of these
Question:
Mark against the correct answer in each of the following:
The direction cosines of the perpendicular from the origin to the plane $\overrightarrow{\mathrm{r}} \cdot(6 \hat{\mathrm{i}}-3 \hat{\mathrm{j}}+2 \hat{\mathrm{k}})+1=0$ are
A. $\frac{6}{7}, \frac{3}{7}, \frac{-2}{7}$
B. $\frac{6}{7}, \frac{-3}{7}, \frac{2}{7}$
C. $\frac{-6}{7}, \frac{3}{7}, \frac{2}{7}$
D. None of these
Question:
Write the equation of a plane passing through the point $(2,-1,1)$ and parallel to the plane $3 x+2 y-z=7$.
Question:
Write the angle between the line
$\frac{x-1}{2}=\frac{y-2}{1}=\frac{z+3}{-2}$ and the plane $x+y+4=0$
Question:
Find the value of $\lambda$ for which the line
$\frac{x-1}{2}=\frac{y-1}{3}=\frac{z-1}{2}$ is parallel to the plane $\bar{r} \cdot(2 \hat{\imath}+3 \hat{\jmath}+4 \hat{k})=4$
Question:
Find the length of the perpendicular from the origin to the plane $\bar{r} \cdot(2 \hat{\imath}-3 \hat{\jmath}+6 \hat{k})+14=0$.
PREPRINT
DFD1B505-5003-48A5-96B8-3801F104F899
# Les Houches Lectures on Black Holes
Andy Strominger
arXiv:hep-th/9501071
Submitted on 12 January 1995
Title: Les Houches Lectures on Black Holes
Authors: Andy Strominger
Dates: 1995-01-12
Subjects: High Energy Physics - Theory; Astrophysics; General Relativity and Quantum Cosmology
arXiv: https://arxiv.org/abs/hep-th/9501071
Note: Comment: 70 pages, 18 figures. Lectures presented at the 1994 Les Houches Summer School "Fluctuating Geometries in Statistical Mechanics and Field Theory." (also available at http://xxx.lanl.gov/lh94/ )
URL: https://www.baryonbib.org/bib/dfd1b505-5003-48a5-96b8-3801f104f899
First Indexed: November 6, 2021
Last Updated: November 6, 2021
# Understand Graph Attention Network
Authors: Hao Zhang, Mufei Li, Minjie Wang, Zheng Zhang
From the Graph Convolutional Network (GCN), we learned that combining local graph structure and node-level features yields good performance on the node classification task. However, the way GCN aggregates is structure-dependent, which may hurt its generalizability.
One workaround is to simply average over all neighbor node features as in GraphSAGE. Graph Attention Network proposes an alternative way by weighting neighbor features with a feature-dependent and structure-free normalization, in the style of attention.
The goal of this tutorial:
• Explain what is Graph Attention Network.
• Demonstrate how it can be implemented in DGL.
• Understand the attentions learnt.
• Introduce inductive learning.
## Introducing Attention to GCN
The key difference between GAT and GCN is how the information from the one-hop neighborhood is aggregated.
For GCN, a graph convolution operation produces the normalized sum of the node features of neighbors:
$h_i^{(l+1)}=\sigma\left(\sum_{j\in \mathcal{N}(i)} {\frac{1}{c_{ij}} W^{(l)}h^{(l)}_j}\right)$
where $$\mathcal{N}(i)$$ is the set of its one-hop neighbors (to include $$v_i$$ in the set, simply add a self-loop to each node), $$c_{ij}=\sqrt{|\mathcal{N}(i)|}\sqrt{|\mathcal{N}(j)|}$$ is a normalization constant based on graph structure, $$\sigma$$ is an activation function (GCN uses ReLU), and $$W^{(l)}$$ is a shared weight matrix for node-wise feature transformation. Another model proposed in GraphSAGE employs the same update rule except that they set $$c_{ij}=|\mathcal{N}(i)|$$.
GAT introduces the attention mechanism as a substitute for the statically normalized convolution operation. Below are the equations to compute the node embedding $$h_i^{(l+1)}$$ of layer $$l+1$$ from the embeddings of layer $$l$$:
\begin{split}\begin{align} z_i^{(l)}&=W^{(l)}h_i^{(l)},&(1) \\ e_{ij}^{(l)}&=\text{LeakyReLU}(\vec a^{(l)^T}(z_i^{(l)}||z_j^{(l)})),&(2)\\ \alpha_{ij}^{(l)}&=\frac{\exp(e_{ij}^{(l)})}{\sum_{k\in \mathcal{N}(i)}^{}\exp(e_{ik}^{(l)})},&(3)\\ h_i^{(l+1)}&=\sigma\left(\sum_{j\in \mathcal{N}(i)} {\alpha^{(l)}_{ij} z^{(l)}_j }\right),&(4) \end{align}\end{split}
Explanations:
• Equation (1) is a linear transformation of the lower layer embedding $$h_i^{(l)}$$ and $$W^{(l)}$$ is its learnable weight matrix.
• Equation (2) computes a pair-wise unnormalized attention score between two neighbors. Here, it first concatenates the $$z$$ embeddings of the two nodes, where $$||$$ denotes concatenation, then takes a dot product of it and a learnable weight vector $$\vec a^{(l)}$$, and applies a LeakyReLU in the end. This form of attention is usually called additive attention, contrast with the dot-product attention in the Transformer model.
• Equation (3) applies a softmax to normalize the attention scores on each node’s in-coming edges.
• Equation (4) is similar to GCN. The embeddings from neighbors are aggregated together, scaled by the attention scores.
There are other details from the paper, such as dropout and skip connections. For the purpose of simplicity, we omit them in this tutorial and leave the link to the full example at the end for interested readers.
In its essence, GAT is just a different aggregation function with attention over features of neighbors, instead of a simple mean aggregation.
## GAT in DGL
Let’s first have an overall impression about how a GATLayer module is implemented in DGL. Don’t worry, we will break down the four equations above one-by-one.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    def __init__(self, g, in_dim, out_dim):
        super(GATLayer, self).__init__()
        self.g = g
        # equation (1)
        self.fc = nn.Linear(in_dim, out_dim, bias=False)
        # equation (2)
        self.attn_fc = nn.Linear(2 * out_dim, 1, bias=False)

    def edge_attention(self, edges):
        # edge UDF for equation (2)
        z2 = torch.cat([edges.src['z'], edges.dst['z']], dim=1)
        a = self.attn_fc(z2)
        return {'e': F.leaky_relu(a)}

    def message_func(self, edges):
        # message UDF for equation (3) & (4)
        return {'z': edges.src['z'], 'e': edges.data['e']}

    def reduce_func(self, nodes):
        # reduce UDF for equation (3) & (4)
        # equation (3)
        alpha = F.softmax(nodes.mailbox['e'], dim=1)
        # equation (4)
        h = torch.sum(alpha * nodes.mailbox['z'], dim=1)
        return {'h': h}

    def forward(self, h):
        # equation (1)
        z = self.fc(h)
        self.g.ndata['z'] = z
        # equation (2)
        self.g.apply_edges(self.edge_attention)
        # equation (3) & (4)
        self.g.update_all(self.message_func, self.reduce_func)
        return self.g.ndata.pop('h')
### Equation (1)
$z_i^{(l)}=W^{(l)}h_i^{(l)},(1)$
The first one is simple. Linear transformation is very common and can be easily implemented in Pytorch using torch.nn.Linear.
### Equation (2)
$e_{ij}^{(l)}=\text{LeakyReLU}(\vec a^{(l)^T}(z_i^{(l)}||z_j^{(l)})),(2)$
The unnormalized attention score $$e_{ij}$$ is calculated using the embeddings of adjacent nodes $$i$$ and $$j$$. This suggests that the attention scores can be viewed as edge data which can be calculated by the apply_edges API. The argument to the apply_edges is an Edge UDF, which is defined as below:
def edge_attention(self, edges):
    # edge UDF for equation (2)
    z2 = torch.cat([edges.src['z'], edges.dst['z']], dim=1)
    a = self.attn_fc(z2)
    return {'e' : F.leaky_relu(a)}
Here, the dot product with the learnable weight vector $$\vec{a^{(l)}}$$ is implemented again using pytorch’s linear transformation attn_fc. Note that apply_edges will batch all the edge data in one tensor, so the cat, attn_fc here are applied on all the edges in parallel.
### Equation (3) & (4)
\begin{split}\begin{align} \alpha_{ij}^{(l)}&=\frac{\exp(e_{ij}^{(l)})}{\sum_{k\in \mathcal{N}(i)}^{}\exp(e_{ik}^{(l)})},&(3)\\ h_i^{(l+1)}&=\sigma\left(\sum_{j\in \mathcal{N}(i)} {\alpha^{(l)}_{ij} z^{(l)}_j }\right),&(4) \end{align}\end{split}
Similar to GCN, update_all API is used to trigger message passing on all the nodes. The message function sends out two tensors: the transformed z embedding of the source node and the unnormalized attention score e on each edge. The reduce function then performs two tasks:
• Normalize the attention scores using softmax (equation (3)).
• Aggregate neighbor embeddings weighted by the attention scores (equation(4)).
Both tasks first fetch data from the mailbox and then manipulate it on the second dimension (dim=1), on which the messages are batched.
def reduce_func(self, nodes):
    # reduce UDF for equation (3) & (4)
    # equation (3)
    alpha = F.softmax(nodes.mailbox['e'], dim=1)
    # equation (4)
    h = torch.sum(alpha * nodes.mailbox['z'], dim=1)
    return {'h' : h}
Analogous to multiple channels in ConvNet, GAT introduces multi-head attention to enrich the model capacity and to stabilize the learning process. Each attention head has its own parameters and their outputs can be merged in two ways:
$\text{concatenation}: h^{(l+1)}_{i} =||_{k=1}^{K}\sigma\left(\sum_{j\in \mathcal{N}(i)}\alpha_{ij}^{k}W^{k}h^{(l)}_{j}\right)$
or
$\text{average}: h_{i}^{(l+1)}=\sigma\left(\frac{1}{K}\sum_{k=1}^{K}\sum_{j\in\mathcal{N}(i)}\alpha_{ij}^{k}W^{k}h^{(l)}_{j}\right)$
where $$K$$ is the number of heads. The authors suggest using concatenation for intermediary layers and average for the final layer.
We can use the above defined single-head GATLayer as the building block for the MultiHeadGATLayer below:
class MultiHeadGATLayer(nn.Module):
    def __init__(self, g, in_dim, out_dim, num_heads, merge='cat'):
        super(MultiHeadGATLayer, self).__init__()
        self.heads = nn.ModuleList()
        for i in range(num_heads):
            self.heads.append(GATLayer(g, in_dim, out_dim))
        self.merge = merge

    def forward(self, h):
        head_outs = [attn_head(h) for attn_head in self.heads]
        if self.merge == 'cat':
            # concat on the output feature dimension (dim=1)
            return torch.cat(head_outs, dim=1)
        else:
            # merge using average
            return torch.mean(torch.stack(head_outs))
### Put everything together
Now, we can define a two-layer GAT model:
class GAT(nn.Module):
    def __init__(self, g, in_dim, hidden_dim, out_dim, num_heads):
        super(GAT, self).__init__()
        self.layer1 = MultiHeadGATLayer(g, in_dim, hidden_dim, num_heads)
        # Be aware that the input dimension is hidden_dim*num_heads since
        # multiple head outputs are concatenated together. Also, only
        # one attention head in the output layer.
        self.layer2 = MultiHeadGATLayer(g, hidden_dim * num_heads, out_dim, 1)

    def forward(self, h):
        h = self.layer1(h)
        h = F.elu(h)
        h = self.layer2(h)
        return h
We then load the cora dataset using DGL’s built-in data module.
from dgl import DGLGraph
from dgl.data import citation_graph as citegrh

def load_cora_data():
    data = citegrh.load_cora()
    features = torch.FloatTensor(data.features)
    labels = torch.LongTensor(data.labels)
    mask = torch.BoolTensor(data.train_mask)
    g = data.graph
    # add self loop
    g.remove_edges_from(g.selfloop_edges())
    g = DGLGraph(g)
    g.add_edges(g.nodes(), g.nodes())
    return g, features, labels, mask
The training loop is exactly the same as in the GCN tutorial.
import time
import numpy as np

g, features, labels, mask = load_cora_data()

# create the model, 2 heads, each head has hidden size 8
net = GAT(g,
          in_dim=features.size()[1],
          hidden_dim=8,
          out_dim=7,
          num_heads=2)

# create optimizer
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

# main loop
dur = []
for epoch in range(30):
    if epoch >= 3:
        t0 = time.time()

    logits = net(features)
    logp = F.log_softmax(logits, 1)
    loss = F.nll_loss(logp[mask], labels[mask])

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    if epoch >= 3:
        dur.append(time.time() - t0)

    print("Epoch {:05d} | Loss {:.4f} | Time(s) {:.4f}".format(
        epoch, loss.item(), np.mean(dur)))
Out:
Epoch 00000 | Loss 1.9457 | Time(s) nan
Epoch 00001 | Loss 1.9450 | Time(s) nan
Epoch 00002 | Loss 1.9442 | Time(s) nan
Epoch 00003 | Loss 1.9434 | Time(s) 0.1133
Epoch 00004 | Loss 1.9426 | Time(s) 0.1140
Epoch 00005 | Loss 1.9418 | Time(s) 0.1142
Epoch 00006 | Loss 1.9410 | Time(s) 0.1146
Epoch 00007 | Loss 1.9401 | Time(s) 0.1144
Epoch 00008 | Loss 1.9393 | Time(s) 0.1142
Epoch 00009 | Loss 1.9384 | Time(s) 0.1141
Epoch 00010 | Loss 1.9376 | Time(s) 0.1140
Epoch 00011 | Loss 1.9367 | Time(s) 0.1139
Epoch 00012 | Loss 1.9358 | Time(s) 0.1138
Epoch 00013 | Loss 1.9349 | Time(s) 0.1136
Epoch 00014 | Loss 1.9340 | Time(s) 0.1136
Epoch 00015 | Loss 1.9330 | Time(s) 0.1137
Epoch 00016 | Loss 1.9321 | Time(s) 0.1133
Epoch 00017 | Loss 1.9311 | Time(s) 0.1129
Epoch 00018 | Loss 1.9301 | Time(s) 0.1124
Epoch 00019 | Loss 1.9291 | Time(s) 0.1120
Epoch 00020 | Loss 1.9281 | Time(s) 0.1116
Epoch 00021 | Loss 1.9270 | Time(s) 0.1113
Epoch 00022 | Loss 1.9260 | Time(s) 0.1111
Epoch 00023 | Loss 1.9249 | Time(s) 0.1110
Epoch 00024 | Loss 1.9238 | Time(s) 0.1109
Epoch 00025 | Loss 1.9227 | Time(s) 0.1109
Epoch 00026 | Loss 1.9215 | Time(s) 0.1107
Epoch 00027 | Loss 1.9204 | Time(s) 0.1105
Epoch 00028 | Loss 1.9192 | Time(s) 0.1103
Epoch 00029 | Loss 1.9180 | Time(s) 0.1103
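A minimal sketch of an accuracy check after training, reusing the net, features, labels and mask defined above; for a real evaluation a held-out test mask would be used instead (it is not loaded in this tutorial):

net.eval()
with torch.no_grad():
    logits = net(features)
    pred = logits.argmax(dim=1)
    acc = (pred[mask] == labels[mask]).float().mean().item()
print("train accuracy: {:.4f}".format(acc))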
## Visualizing and Understanding Attention Learnt
### Cora
The following table summarizes the model performances on Cora reported in the GAT paper and obtained with dgl implementations.
Model Accuracy
GCN (paper) $$81.4\pm 0.5\%$$
GCN (dgl) $$82.05\pm 0.33\%$$
GAT (paper) $$83.0\pm 0.7\%$$
GAT (dgl) $$83.69\pm 0.529\%$$
What kind of attention distribution has our model learnt?
Because the attention weight $$a_{ij}$$ is associated with edges, we can visualize it by coloring edges. Below we pick a subgraph of Cora and plot the attention weights of the last GATLayer. The nodes are colored according to their labels, whereas the edges are colored according to the magnitude of the attention weights, which can be referred with the colorbar on the right.
You can see that the model seems to learn different attention weights. To understand the distribution more thoroughly, we measure the entropy of the attention distribution. For any node $$i$$, $$\{\alpha_{ij}\}_{j\in\mathcal{N}(i)}$$ forms a discrete probability distribution over all its neighbors with the entropy given by
$H(\{\alpha_{ij}\}_{j\in\mathcal{N}(i)})=-\sum_{j\in\mathcal{N}(i)} \alpha_{ij}\log\alpha_{ij}$
Intuitively, a low entropy means a high degree of concentration, and vice versa; an entropy of 0 means all attention is on one source node. The uniform distribution has the highest entropy of $$\log(|\mathcal{N}(i)|)$$. Ideally, we want to see the model learn a distribution of lower entropy (i.e., one or two neighbors are much more important than the others).
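For a concrete feel of this quantity, here is a minimal sketch written in the same UDF style as the layers above; it assumes the normalized scores from equation (3) have been saved per edge as g.edata['alpha'], which the layer code above does not do by default (an assumption made purely for illustration):

import torch

def attention_entropy(g):
    # assumes the normalized attention scores alpha are stored per edge
    # as g.edata['alpha'] with shape (num_edges, 1)
    a = g.edata['alpha'].clamp(min=1e-12)
    g.edata['nats'] = -a * torch.log(a)
    def message_func(edges):
        return {'nats': edges.data['nats']}
    def reduce_func(nodes):
        # sum the per-edge contributions over each node's in-edges
        return {'H': torch.sum(nodes.mailbox['nats'], dim=1)}
    g.update_all(message_func, reduce_func)
    return g.ndata['H'].squeeze()   # one entropy value per node

A histogram of the returned values is exactly the node-wise entropy plot discussed here.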
Note that since nodes can have different degrees, the maximum entropy will also be different. Therefore, we plot the aggregated histogram of entropy values of all nodes in the entire graph. Below are the attention histograms learned by each attention head.
As a reference, here is the histogram if all the nodes have uniform attention weight distribution.
One can see that the attention values learned are quite similar to a uniform distribution (i.e., all neighbors are equally important). This partially explains why the performance of GAT is close to that of GCN on Cora (according to the authors' reported results, the accuracy difference averaged over 100 runs is less than 2%); attention does not matter since it does not differentiate much anyway.
Does that mean the attention mechanism is not useful? No! A different dataset exhibits an entirely different pattern, as we show next.
### Protein-Protein Interaction (PPI) networks
The PPI dataset used here consists of $$24$$ graphs corresponding to different human tissues. Nodes can have up to $$121$$ kinds of labels, so the label of node is represented as a binary tensor of size $$121$$. The task is to predict node label.
We use $$20$$ graphs for training, $$2$$ for validation and $$2$$ for test. The average number of nodes per graph is $$2372$$. Each node has $$50$$ features that are composed of positional gene sets, motif gene sets and immunological signatures. Critically, test graphs remain completely unobserved during training, a setting called “inductive learning”.
We compare the performance of GAT and GCN for $$10$$ random runs on this task and use hyperparameter search on the validation set to find the best model.
Model F1 Score(micro)
GAT $$0.975 \pm 0.006$$
GCN $$0.509 \pm 0.025$$
Paper $$0.973 \pm 0.002$$
The table above is the result of this experiment, where we use micro F1 score to evaluate the model performance.
Note
Below is the calculation process of F1 score:
\begin{align}\begin{aligned}precision=\frac{\sum_{t=1}^{n}TP_{t}}{\sum_{t=1}^{n}(TP_{t} +FP_{t})}\\recall=\frac{\sum_{t=1}^{n}TP_{t}}{\sum_{t=1}^{n}(TP_{t} +FN_{t})}\\F1_{micro}=2\frac{precision*recall}{precision+recall}\end{aligned}\end{align}
• $$TP_{t}$$ represents the number of nodes that both have and are predicted to have label $$t$$
• $$FP_{t}$$ represents the number of nodes that do not have but are predicted to have label $$t$$
• $$FN_{t}$$ represents the number of nodes that have label $$t$$ but are predicted as others.
• $$n$$ is the number of labels, i.e. $$121$$ in our case.
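As a cross-check, the micro-averaged F1 above can be computed with scikit-learn; the tensor names, shapes and the 0.5 threshold below are assumptions for illustration rather than part of the experiment:

import torch
from sklearn.metrics import f1_score

# `logits` and multi-hot `labels` are assumed to be tensors of shape
# (num_nodes, 121); 0.5 is an arbitrary decision threshold
pred = (torch.sigmoid(logits) > 0.5).int().cpu().numpy()
true = labels.int().cpu().numpy()
print("micro F1:", f1_score(true, pred, average="micro"))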
During training, we use BCEWithLogitsLoss as the loss function. The learning curves of GAT and GCN are presented below; what is evident is the dramatic performance advantage of GAT over GCN.
As before, we can have a statistical understanding of the attentions learnt by showing the histogram plot for the node-wise attention entropy. Below are the attention histogram learnt by different attention layers.
Attention learnt in layer 1:
Attention learnt in layer 2:
Attention learnt in final layer:
Again, comparing with uniform distribution:
Clearly, GAT does learn sharp attention weights! There is a clear pattern over the layers as well: the attention gets sharper with higher layer.
Unlike the Cora dataset where GAT’s gain is lukewarm at best, for PPI there is a significant performance gap between GAT and other GNN variants compared in the GAT paper (at least 20%), and the attention distributions between the two clearly differ. While this deserves further research, one immediate conclusion is that GAT’s advantage lies perhaps more in its ability to handle a graph with more complex neighborhood structure.
## What’s Next?
So far, we demonstrated how to use DGL to implement GAT. There are some missing details such as dropout, skip connections and hyper-parameter tuning, which are common practices and do not involve DGL-related concepts. We refer interested readers to the full example.
• See the optimized full example here.
• Stay tuned for our next tutorial about how to speed up GAT models by parallelizing multiple attention heads and SPMV optimization.
# linear phase notch filter matlab
I need to filter out 9 Hz from a signal sampled at 256 Hz using a linear-phase filter.
If someone could provide an explanation or a code example I would greatly appreciate it.
I have tested this code
function [Hd1,Hd2] = NotchFIR(FS,F0,FG,FGAINdB) % Design
    if (F0 > FS/2)
        warnstr = ['WARNING! The Sampling frequency is less than twice the notch frequency. Response will be inaccurate.']
    end
    if (FG == F0)
        warnstr = ['WARNING! User should not set the gain of the notch frequency. Response will be inaccurate.']
    end
    FGAIN = 10^(FGAINdB/20);
    w0 = 2*pi*F0/FS;
    wg = 2*pi*FG/FS;
    GH = abs(1 - 2*cos(w0)*exp(-j*wg) + exp(-j*2*wg));
    G = FGAIN/GH;
    h = [1 -2*cos(w0) 1];
    freqz(G*h,100,[],FS);
    Hd1 = G*h;
    Hd2 = 100;
end
t=0:1/256:1;
y=sin(2*pi*9*t); [Hd1,Hd2]=NotchFIR(256,8.9,9.1,1)
x=filter(Hd1,Hd2,y);
figure
plot(t,x);
But the frequency response of the digital filter shows that the selection of the notch frequency wasn't sharp and many frequencies were attenuated.
• So, what exactly is your question that we can answer? Please edit your question to actually include a question sentence, and explain what you've researched and got in trouble with! Dec 11 '16 at 14:04
• You need to design a finite impulse response (FIR) notch filter where the center of the notch (also called "stopband") is nine Hz. Measured in Hz, what is the width of your desired notch? Measured in decibels (dB), what is the desired notch attenuation? Measured in dB, what is the allowable passband peak-to-peak ripple? Dec 11 '16 at 15:16
• - The bandwidth must be tight, 8.9Hz ~ 9.1Hz should be fine. - 80 dB attenuation - 0.01 dB peak-to-peak ripple , thank you
– omar
Dec 11 '16 at 21:25
• I have tested this code but the result was not satisfactory : 'Sr=256; fdfr1=8.9; fdfr2=9.1; fc1 = fdfr1/Sr; fc2 = fdfr2/Sr; N = 10; % taps n = -(N/2):(N/2); % order filter n = n+(n==0)*eps; [h] = sin(n*2*pi*fc1)./(n*pi) - sin(n*2*pi*fc2)./(n*pi); [w] = 0.54 + 0.46*cos(2*pi*n/N); % window for betterment d = h.*w; % better coefficients freqz(d); % use this command to see frequency response t=0:1/256:1; X=sin(2*pi*9*t); y = filter(d,1,X) % X is input, y is filtered output figure plot(t,y);'
– omar
Dec 12 '16 at 10:54 |
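For comparison, here is a hedged sketch of a linear-phase FIR band-stop design in Python/SciPy rather than MATLAB; the band edges, tap count and window are illustrative guesses and do not meet the 80 dB / 0.01 dB specification (a 0.2 Hz-wide notch at Fs = 256 Hz needs a far longer filter):

import numpy as np
from scipy import signal

fs = 256.0
# odd tap count gives a type I FIR filter, hence exactly linear phase;
# pass_zero=True with two cutoffs yields a band-stop response
taps = signal.firwin(2001, [8.5, 9.5], fs=fs, pass_zero=True,
                     window=("kaiser", 8.0))

t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 9 * t) + np.sin(2 * np.pi * 20 * t)
y = signal.lfilter(taps, 1.0, x)   # 9 Hz component is suppressed after the transient

Because the filter is symmetric, its group delay is a constant (len(taps) - 1) / 2 samples, so the output can simply be shifted back by that amount if alignment with the input matters.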
QUESTION
# ECE 353 Week 1 DQ 2 Addressing Bias in Intelligence Testing
This pack of ECE 353 Week 1 Discussion Questions 2 Addressing Bias in Intelligence Testing shows the solutions to the following problems:
Daniel, an eight-year-old child from a poor, inner-city neighborhood, is given a traditional IQ test. One section of the IQ test involves defining various words.
Files: ECE-353 Week 1 DQ 2 Addressing Bias in Intelligence Testing.zip
# Broadband single molecule SERS detection designed by warped optical spaces
## Abstract
Engineering hotspots is of crucial importance in many applications including energy harvesting, nano-lasers, subwavelength imaging, and biomedical sensing. Surface-enhanced Raman scattering spectroscopy is a key technique to identify analytes that would otherwise be difficult to diagnose. In standard systems, hotspots are realised with nanostructures made by acute tips or narrow gaps. Owing to the low probability for molecules to reach such tiny active regions, high sensitivity is always accompanied by a large preparation time for analyte accumulation which hinders the time response. Inspired by transformation optics, we introduce an approach based on warped spaces to manipulate hotspots, resulting in broadband enhancements in both the magnitude and volume. Experiments for single molecule detection with a fast soaking time are realised in conjunction with broadband response and uniformity. Such engineering could provide a new design platform for a rich manifold of devices, which can benefit from broadband and huge field enhancements.
## Introduction
Achieving biochemical sensing with both ultrafast speed and detection limits down to single-molecule level is highly desirable in a plethora of applications, and in particular for those related to real-time environmental monitoring and pathogens and protein recognition for disease diagnosis. Surface-enhanced Raman scattering (SERS) spectroscopy is one of the most powerful analytical strategies used to date for the identification of chemical fingerprints under ambient conditions, by virtue of greatly enhanced inelastic Raman scattering from minute amounts of substance deposited near nanostructured metallic surfaces1,2,3,4. Many different SERS systems have been developed to achieve a large enhancement factor (EF) and good sensitivity, ranging from nanoparticle (NP) aggregate/oligomer5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21 to 2D/3D structured surfaces22,23,24,25,26,27 and hybrid structures28,29,30,31,32,33,34. Despite obtaining a high EF level beyond 10⁷, a comparatively long soaking time for sample preparation is needed, ranging from one hour to tens of hours18,22,23,25,28,30, owing to the tiny volume of regions with strong electromagnetic energy localisation. The effective volume only accounts for a tiny fraction of the total volume of the real device6,12,17, limiting the probability for the analytes to reach the region where energy hotspots exist. Such a slow process substantially hinders SERS in all practical applications that place stringent requirements on both sensitivity and time response, ranging from environmental monitoring of hazardous pollutants to disease diagnosis in clinics. In this article, inspired by the recent development in transformation optics (TO), we design and implement a SERS spectroscopy system that possesses a large broadband sensitivity and fast time response at the same time. By virtue of the equivalence of light propagation between media containing gradients in optical properties and warped geometries of spacetime35,36,37,38,39, we here propose to leverage the curvature of the space to engineer electromagnetic hotspots that can substantially enhance both the effective volume and the magnitude of the field, hence improving the Raman response of an analyte in a broad spectrum. As a result of this approach, the number of molecules that can access the region with high EF significantly increases, thus enabling fast SERS detection with high sensitivity. In our samples realised with gas-phase cluster beam deposition40, we observed a 20-fold enhancement of the Raman signal compared to a reference flat substrate, reporting single-molecule detectability with a short soaking time of 60 s. The measured EF reaches an average value beyond 10⁸ over a broad band in the visible, and exhibits comparable repeatability and uniformity34,41,42. Employing warped spatial geometries as an additional degree of freedom in hotspot engineering may offer new strategies towards design principles for applications also in nonlinear optics43, plasmonic lasers44 and hot-electrons45.
## Results
### Sample design and nanofabrication
The starting point of our design is from a very simple configuration, represented by a plasmonic NP on top of a flat substrate in contact with air. In this configuration the electromagnetic field cannot be confined horizontally, owing to the homogenous refractive index nair = 1. Hypothetically, if an additional spatial variation of refractive index is induced in the vicinity of the substrate, the electromagnetic energy could be trapped and consequently boost the electromagnetic energy in the proximity of the NP (see more details in Supplementary Note 1). However, inducing a prominent variation of nair (such as the one shown in left panel of Fig. 1a) seem impossible to realise for a homogeneous material. However, with the aid of TO, we can create an equivalent structure that can give the illusion to light to propagate in such a material with a spatially varying refractive index, thus overcoming this challenge and inducing an extremely high energy localisation. This task is accomplished by warping the space (x′, y′, z′) by a coordinate transformation (x, y, z) = Ω (x′, y′, z′), as illustrated in the right panel of Fig. 1a. In the real (transformed) space (x, y, z), the dynamics of light is described by the same set of Maxwell’s equations that models photon dynamics in the original (virtual) space (x′, y′, z′), but with a new homogeneous medium above the substrate with refractive index n(x, y, z) = 1. The left and right configurations of Fig. 1a are optically equivalent. Details on the demonstration of this conditions can be found in refs. 38,39, as well as in the Supplementary Note 2.
The warped substrate with a constant curvature κ (Fig. 1c) is a practical structure, which can strongly localise the electromagnetic waves due to the induced effective refractive index gradient. For the quantitative demonstration of this effect, we implement 3D full-wave simulations based on the finite difference method in time domain (FDTD). Figure 1b–e compare the magnitude of the electric field |E| for a metallic NP lying on the top of (a, b) flat and (c, d) warped substrate under normal illumination of a monochromatic light with wavelength λ = 345 nm. We select the radius of the Ag NP as r = 30 nm, and the curvature of the substrate as κ = 2 µm−1. A silica spacer with 10 nm thickness is inserted between the NP and the Au substrate. A hotspot is formed around the NP with enhanced electric field in a quite limited area (Fig. 1b, c). Figure 1d, e illustrates the situation when a NP lies on a warped surface formed by an arc with inscribed angle of π. Interestingly, the electric field around the NP experiences an immense enhancement in the warped substrate to form a larger and brighter hotspot, matching our qualitative prediction from TO in Fig. 1a. Quantitative comparison of the size of hotspot between flat and warped substrate can be found in Supplementary Note 4. A more quantitative comparison is obtained by calculating the spatially averaged electric field $$\overline {|{\mathbf{E}}|} = \mathop {\int}\limits_A {\left| {\mathbf{E}} \right|{{\mathrm{d}}x{\mathrm{d}}y{\mathrm{d}}z/}\mathop {\int}\limits_A {{{\mathrm{d}}x{\mathrm{d}}y{\mathrm{d}}z}} }$$, where A is the volume with refractive index n(x, y, z) = nair covered by a square region of length 2.4r (green dashed lines in Fig. 1a, c). The ratio between the averaged field in warped and flat substrate$$\gamma = \overline {|{\mathbf{E}}|} _{{\mathrm{curv}}}{\mathrm{/}}\overline {|{\mathbf{E}}|} _{{\mathrm{flat}}}$$ provides a measure of the averaged field EF. We achieved an averaged EF of around 300% for the electric field in the air around the NP in warped space. We further implemented a number of simulations with different NP radius r and wavelength λ for a comprehensive investigation and the results are shown in Fig. 1f. Our warped structure achieves broadband enhancement in the visible region for all NPs with different radius r despite the fluctuations of the averaged electric fields for NPs with different plasmonic resonances (see Supplementary Note 3 for more details). This remarkable enhancement effect results from the fact that the gradient of neff induced by curvature is independent of the wavelength37. The situation when the NP deviates from the centre of nanobowl (NB) is investigated in the Supplementary Note 5.
Based on this mechanism of warped spatial coordinates, we designed a SERS spectroscopy device by applying our model to a warped 3D structure with a constant radial curvature in the space. Multiple NPs are utilised to improve the localisation of light by interparticle coupling, with hotspots further enhanced by the warped substrate (see Supplementary Note 6 for more details). Figure 2a is a schematic illustration of the designed SERS system-a 3D hierarchical nanostructure of NPs in warped substrate (NP-on-WS) formed by NBs. Our hybrid SERS substrate is composed of a monolayer of Ag clusters deposited inside highly-ordered Au NB arrays. Inside NBs, each NP strongly confines light around, significantly enhancing the sensitivity of SERS in the vicinity of the NP.
The substrate is fabricated by combining template metal deposition and gas-phase cluster beam deposition, as shown schematically in Fig. 2b (see Methods for more details). Figure 2c,d shows scanning electron microscope (SEM) images of Au NBs fabricated from silica template, revealing close-packed arrays that can efficiently provide NP with warped spaces. To obtain a more precise control of the number and location of the NPs, we perform gas-phase cluster beam technique instead of colloidal self-assembly, to obtain a homogeneous coverage with the maximisation of the density of the hotspots in the sensor. Figure 2e, f shows typical SEM images of the final NP-on-WS hierarchical structures, with Ag NPs uniformly distributed on the entire inner surface of the Au NBs. We implemented a statistical analysis of the diameter distribution of Ag NPs based on SEM images, acquiring a logarithmic normal distribution with a mean value of 48 nm and a standard deviation of 10.4 nm (more details in Supplementary Note 8). Despite the fluctuation of radii, all NPs reach similar level of field enhancement by the curvature as shown in Fig. 1e.
### Warped space SERS device characterisation
We investigate the sensing capability of the prepared NP-on-WS hierarchical nanostructure array in detecting a typical organic analyte—R6G. We immersed the samples into methanol solution of R6G for a short period (60 s). Details of the sample preparation and SERS measurements are shown in the methods. We implement the SERS measurements at different wavelengths to investigate the broadband field enhancement predicted in the previous section. Figure 3a–c illustrates the Raman spectra with pump lasers at 473, 514 and 633 nm respectively, for NP-on-WS, NPs on flat substrate (NP-on-FS) as a control, and a reference of Au film with SEM images presented in Fig. 3d. The positions of the characteristic peaks of R6G including 613, 775, 1187, 1309, 1360, 1506, 1569 and 1648 cm−1 are in agreement with previously reported studies6,19,23,30,34. For quantitative analysis, we compute the globally averaged field EF based on the data from Fig. 3a–c. Rather than making the standard assumption that Raman scattering originates only from a monolayer of molecules that cover the effective surface of the nanostructures15,22,25,29,30,31,32,34, we here use a measure characterised by the averaged EF that originates for all the molecules in the system. This measure provide a better matching of a realistic situation and also eliminates any inaccuracy resulting from the estimation of the active surface area of each nanostructure and the cross-sections of the anisotropic molecules. The globally averaged field EF is defined as a simple normalisation of Raman intensity46:
$$\overline {EF} = \frac{I}{{I_0}}\frac{{C_0}}{C}\frac{{P_0}}{P}{,}$$
where I is the Raman intensity, C the molar concentration of the solution and P the laser power, respectively. I0, C0 and P0 are the corresponding values of the baseline with a pure Au film. Remarkably, we experimentally achieve a broadband EF beyond 10⁸ at 1360 cm⁻¹ for NP-on-WS, as summarised in Table 1. To disentangle the impact of the absorption/scattering cross-section of the NPs and analytes at different wavelengths, we define a parameter α_I = I_NP-on-WS/I_NP-on-FS that purely represents the enhancement due to the curved substrate, as demonstrated in Table 1. I_NP-on-WS and I_NP-on-FS are the Raman peak intensities at 1360 cm⁻¹ for NP-on-WS and NP-on-FS under the same molecule concentration and laser power. The spectra for the 785 nm laser can be found in Supplementary Note 9. A prominent improvement is observed from the value of α_I, implying the cooperative effect of both the enlarged size and the enhanced intensity of the hot region around the NP induced by the warped substrate. The curved substrate not only increases the NP density by a factor of 1.6 (see Supplementary Note 15); more importantly, it induces the effective gradient of refractive index to achieve a 20× boost at 473, 633 and 785 nm. The degradation of α_I at 514 nm pump qualitatively agrees with the simulation results in Fig. 1f for |E| of a single NP at the bottom. However, the enhancement still exists due to the contributions from the NPs away from the centre of the bowl. A detailed explanation with simulations of the fourth power of the surface electric field, which is proportional to the Raman signal intensity, can be found in Supplementary Note 5. Such a broadband response provides flexibility for selecting the pump laser to maximise the cross-section of the analyte and consequently further improve the sensitivity. To demonstrate the generality of the curvature-induced field enhancement, we also investigate the SERS system with different NP densities, showing a nearly homogeneous enhancement with the value of α_I around 20. The EF estimation with single-layer coverage is also investigated, with a similar value of α_I. See Supplementary Note 14 for more details. Additional evidence for the curvature-induced field enhancement is demonstrated in Supplementary Note 17.
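As a minimal illustration of this normalisation, a small helper with placeholder numbers (not measured values):

def averaged_ef(I, I0, C, C0, P, P0):
    # averaged EF as defined above: (I/I0) * (C0/C) * (P0/P)
    return (I / I0) * (C0 / C) * (P0 / P)

# purely illustrative counts, molar concentrations and laser powers
print(averaged_ef(I=5.0e4, I0=2.0e2, C=1e-6, C0=1e-2, P=0.05, P0=5.0))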
To demonstrate the potential application of our warped space sensor as an ultra-sensitive SERS chemical detector, measurements are performed for samples immersed in R6G solutions of different concentrations from 10⁻⁶ M down to 10⁻¹⁴ M, for only 60 s. Figure 3e presents typical spectra for different R6G concentrations. In spite of the intensity decrease at more dilute concentrations, unambiguous signatures can be straightforwardly distinguished in the SERS spectra even at the extremely low concentration of 10⁻¹² M. The detection limit can be further improved by other means, such as increasing the integration time and/or the laser power. For the quantitative analysis, we select two Raman resonances at 613 and 1360 cm⁻¹, which correspond to the in-plane vibration of C–C–C bonds and deformed C–C bonds, respectively23. The dependence of peak intensities on R6G concentration is plotted in Fig. 3f. In a broad concentration range, the figure demonstrates a good linear relationship in log–log scale between the concentration and Raman intensity with close-to-unit coefficient of determination R² (0.9801 for 613 and 0.9838 for 1360 cm⁻¹), providing the potential for label-free quantitative detection of chemicals.
We investigated the temporal dependence of Raman scattering utilising an extremely diluted solution with a concentration of 10⁻¹² M, as shown in Fig. 3g–h. In Fig. 3g, SERS spectra of R6G are plotted with different soaking times. Strong Raman signals are observed unambiguously even within 10 s, as verified by the dependence of the peak intensities over time in Fig. 3h. Remarkably, our SERS system does not require a long preparation time, benefiting from the increased probability of molecules reaching the enlarged active region.
To experimentally confirm the single-molecule sensitivity of our system, we apply the bi-analyte approach that is based on the spectroscopic contrast between two different kinds of SERS-active molecules47,48,49,50. We select the two molecules as R6G and crystal violet (CV), which are mixed and deposited on the SERS substrate with a soaking time of 60 s. We obtain 1600 spectra from 2D Raman scanning on the substrate within a square area of 20 µm length, as illustrated in Fig. 4a, b. The intensity in Fig. 4a is the mapping at 613 cm⁻¹ corresponding to peaks of R6G while the intensity in Fig. 4b is the mapping at 414 cm⁻¹ for peaks of CV. Figure 4c demonstrates the typical bi-analyte spectra for three different events. In contrast to more concentrated solutions of the two analytes which should always yield different mixed spectra, the collected spectra are dominated by either one analyte (red/green solid line), a mixture of one R6G and one CV molecule (blue solid line), or no molecules detected at all (null event). The ratio among R6G: mixed: CV is 4: 1: 4.8 (as shown in Fig. 4d), matching the ratio previously reported for single-molecule sensitivity47,48. Besides the analysis above based on empirical event counting, we also implement a more rigorous statistical analysis based on modified principal component analysis (MPCA)49,50,51, whose results are independent of the ratio between the number of molecules and the nanostructures and the cross-section difference of the active molecules. MPCA typically uses two principal components to represent the data set (the spectra), as shown in Fig. 4e. Two batches of spectra are clearly classified as R6G (red circles) and CV (green circles) events correspondingly, with mixed events (blue circles) between the axes and null events located near the origin (black circles). The probability of a single-molecule signature is plotted in the form of a histogram where the noise (null events) is excluded, as shown in Fig. 4f. Dominant and equal contributions from the single-dye-signal events at p ≈ 0 (CV events) and p ≈ 1 (R6G events) are observed, verifying the single-molecule detection regime for our system. More details of the MPCA approach can be found in Supplementary Note 10.
Besides sensitivity and rapid time response, the uniformity of the active substrate is another beneficial factor for a desired SERS spectroscopy in practical applications. We performed Raman mapping of our SERS systems and acquired reproducible and stable spectra with a relative standard deviation of 6.82% at a concentration of 10⁻⁶ M and 14.08% at a concentration of 10⁻¹² M, with more details in Supplementary Note 11.
## Discussion
By exploiting the equivalence of light propagation between spatio-temporal geometries and materials with variation of refractive index, we designed and characterised a SERS device based on warped spaces that strongly enhance broadband electromagnetic energy over a relatively large spectral region. Such warped induced broadband field enhancement is experimentally observe in SERS spectroscopy experiments with NPs of different sizes deposited on warped substrate fabricated hierarchically with gas-phase deposition, which shows a 20-fold enhancement compared with a classic control system made by NPs deposited on a flat substrate. In general, a trade-off between the time response and sensitivity always exists for the SERS detection, owing to the fact that it always takes some time for the analyte to achieve the hotspots. The substrate based SERS with long soaking time can achieve high EF for single-molecule level detection while the NP based system can be readily merged with the analyte even for in situ detection11,52,53 at the cost of reduced sensitivity. Taking advantage of the power from TO, we mitigate the stringent constraint between time and detectability, achieving single-molecule detection with only 60 s soaking time. Besides, owing to the versatility of both fabricating different types of NPs with gas-phase deposition technology and the pump laser wavelength, our system can be feasibly adapted for label-free detection of many different types of molecules and analytes. This can be particularly applied to proteins such as EGFR and HER2/neu, with the size beyond tens of nanometres, which are used in early cancer screening diagnoses. See more details of SERS protein detection in Supplementary Note 16.
Not limited to biochemical sensing, hotspots engineering with warped spaces with high-broadband sensitivity and large volume can boost the development of many different applications of crucial importance in many fields, including nonlinear harmonic generation, plasmonic laser and hot-electrons, where new structures can be engineered by equivalent systems where the existence of specific media is substituted by suitably defined warped spatial geometries. As discussed in this specific example, this approach can overcome challenges that would otherwise seem impossible to address, thus realising a class of high performing devices for a manifold of real-world applications. Meanwhile, for more sophisticated structures with multilayer structures, the integrating the device into the curved substrate may be a technical challenge to overcome. And the enhancement variation across the spectrum may require additional judicious design to achieve the optimal broadband performance.
## Methods
### Numerical calculations of curvature-induced field enhancement
FDTD calculations with commercial software (FDTD solution, Lumerical Inc.) are used to simulate the linear response of silver NPs on flat/warped Au substrate with 10 nm thick silica spacer. The optical constants of gold and silver are taken from ref. 54. For the warped case, curvature of the substrate is chosen as constant value κ = 2 µm−1, corresponding to a circle with radius of 500 nm. And the inscribed angle of the substrate is select as π, i.e., half of the circle. An electric field is applied with polarisation along x.
### Sample fabrication
Au NB array was prepared by the template metal deposition method. Gold layer with thickness of 50 nm was deposited by vacuum evaporation on a sacrificial 2D hexagonally close-packing colloidal crystal template, making the microbeads hemispherically covered by gold. Then the silica template was etched by using hydrogen fluoride acid to leave the Au NBs in the solution. And the upward interconnected Au NB arrays can be prepared via a transferring process. A 10 nm thick SiO2 thin film uniformly covered the entire surfaces of the Au NBs by performing e-beam evaporation. Then, gas-phase cluster beam deposition process is used to deposit Ag NPs on the inner wall surface of Au/SiO2 NB structure. A silver plate with high purity (99.99%) is used as the sputtering target. A DC power supply was used for the sputtering of Ag target in argon gas ambient with a pressure of 100 Pa, maintained by passing argon gas to the liquid nitrogen-cooled aggregation tube. Sputtered Ag atoms lost energy by colliding with the cooled argon gas in the aggregation tube and formed NPs. The NPs were swept by the gas stream into high vacuum through a nozzle and a skimmer, forming a collimated NP beam with a high speed of 1000 m/s, and then deposited on the surface of substrates. The deposition was carried out at a rate of 0.5Å /s for 10 min. In situ annealing was carried out for 10 min at 150 °C. More details of the fabrication process can be found in Supplementary Notes 7 and 8.
### SERS measurement
For Raman measurements, SERS substrates were soaked in R6G-ethanol, CV-ethanol and R6G/CV-ethanol solution for different time and then remove them from the solution vertically and then dried under flowing N2. For quantitative estimation of the EF of the NP-on-WS nanostructure, a 100 mM R6G-in-methanol solution was used to prepare a reference sample. For the reference sample, a drop (around 10 µL) of 100 mM R6G-in-methanol solution was dispersed onto a flat Au film surface supported with silica slice. For Raman spectrum measurement lasers with different wavelengths (473, 514, 633 and 785 nm) (through a 100× objective lens) was employed with a beam spot size of 700 nm in diameter. The Raman spectra are recorded from an upright-configured confocal microscopy (NT-MDA NTEGRA SPECTRA and HR Evolution HORIBA JOBIN YVON). The laser power used is 1, 0.1, and 0.01 mW for different structures. More details of Raman characterisation can be found in Supplementary Note 9. Raman mappings are performed in sample-scanning mode with a step of 0.5 µm per point and a dwell time of 1 s per point.
## Data availability
The data that support the findings of this study are available from the corresponding author upon request.
## References
1. 1.
Nie, S. & Emory, S. R. Probing single molecules and single nanoparticles by surface-enhanced Raman scattering. Science 275, 1102–1106 (1997).
2. 2.
Kneipp, K. et al. Single molecule detection using surface-enhanced Raman scattering (sers). Phys. Rev. Lett. 78, 1667–1670 (1997).
3. 3.
Schlücker, S. Surface-enhanced Raman spectroscopy: concepts and chemical applications. Angew. Chem. Int. Ed. 53, 4756–4795 (2014).
4. 4.
Alvarez-Puebla, R. & Liz-Marzan, L. Environmental applications of plasmon assisted Raman scattering. Energy Environ. Sci. 3, 1011–1017 (2010).
5. 5.
Xu, H., Bjerneld, E. J., K äll, M. & Börjesson, L. Spectroscopy of single hemoglobin molecules by surface enhanced Raman scattering. Phys. Rev. Lett. 83, 4357 (1999).
6. 6.
Camden, J. P. et al. Probing the structure of single-molecule surface-enhanced Raman scattering hot spots. J. Am. Chem. Soc. 130, 12616–12617 (2008).
7. 7.
Liang, H., Li, Z., Wang, W., Wu, Y. & Xu, H. Highly surface-roughened flower-like silver nanoparticles for extremely sensitive substrates of surface-enhanced Raman scattering. Adv. Mater. 21, 4614–4618 (2009).
8. 8.
Qian, X. et al. In vivo tumor targeting and spectroscopic detection with surface-enhanced Raman nanoparticle tags. Nat. Biotechnol. 26, 83 (2008).
9. 9.
Lim, D.-K., Jeon, K.-S., Kim, H. M., Nam, J.-M. & Suh, Y. D. Nanogap-engineerable Raman-active nanodumbbells for single-molecule detection. Nat. Mater. 9, 60 (2010).
10. 10.
Yang, M. et al. SERS-active gold lace nanoshells with built-in hotspots. Nano Lett. 10, 4013–4019 (2010).
11. 11.
Li, J. F. et al. Shell-isolated nanoparticle-enhanced Raman spectroscopy. Nature 464, 392 (2010).
12. 12.
Lim, D.-K. et al. Highly uniform and reproducible surface-enhanced Raman scattering from DNA-tailorable nanoparticles with 1-nm interior gap. Nat. Nanotechnol. 6, 452–460 (2011).
13. 13.
Kim, N. H., Lee, S. J. & Moskovits, M. Reversible tuning of sers hot spots with aptamers. Adv. Mater. 23, 4152–4156 (2011).
14. 14.
Pazos-Perez, N. et al. Organized plasmonic clusters with high coordination number and extraordinary enhancement in surface-enhanced Raman scattering (SERS). Angew. Chem. Int. Ed. 51, 12688–12693 (2012).
15. 15.
Lee, H. K. et al. Plasmonic liquid marbles: a miniature substrate-less SERS platform for quantitative and multiplex ultratrace molecular detection. Angew. Chem. 126, 5154–5158 (2014).
16. 16.
Li, P. et al. Evaporative self-assembly of gold nanorods into macroscopic 3d plasmonic super-lattice arrays. Adv. Mater. 28, 2511–2517 (2016).
17. 17.
Li, C.-Y. et al. Smart Ag nanostructures for plasmon-enhanced spectroscopies. J. Am. Chem. Soc. 137, 13784–13787 (2015).
18. 18.
Ma, C., Gao, Q., Hong, W., Fan, J. & Fang, J. Real-time probing nanopore-in-nanogap plasmonic coupling effect on silver supercrystals with surface-enhanced Raman spectroscopy. Adv. Funct. Mater. 27, 1603233 (2017).
19. 19.
Liu, H. et al. Three-dimensional and time-ordered surface-enhanced Raman scattering hotspot matrix. J. Am. Chem. Soc. 136, 5332–5341 (2014).
20. 20.
Caldarola, M. et al. Non-plasmonic nanoantennas for surface enhanced spectroscopies with ultra-low heat conversion. Nat. Commun. 6, 7915 (2015).
21. 21.
Liu, N., Tang, M. L., Hentschel, M., Giessen, H. & Alivisatos, A. P. Nanoantenna-enhanced gas sensing in a single tailored nanofocus. Nat. Mater. 10, 631–636 (2011).
22. 22.
Jung, K. et al. Hotspot-engineered 3D multipetal flower assemblies for surface-enhanced Raman spectroscopy. Adv. Mater. 26, 5924–5929 (2014).
23. 23.
Wang, P., Liang, O., Zhang, W., Schroeder, T. & Xie, Y.-H. Ultra-sensitive graphene- plasmonic hybrid platform for label-free detection. Adv. Mater. 25, 4918–4924 (2013).
24. 24.
Alvarez-Puebla, R. A. et al. Gold nanorods 3D-supercrystals as surface enhanced Raman scattering spectroscopy substrates for the rapid detection of scrambled prions. Proc. Natl Acad. Sci. 108, 8157–8161 (2011).
25. 25.
Chirumamilla, M. et al. 3D nanostar dimers with a sub-10-nm gap for single-/few-molecule surface-enhanced Raman scattering. Adv. Mater. 26, 2353–2358 (2014).
26. 26.
Sanz-Ortiz, M. N., Sentosun, K., Bals, S. & Liz-Marzán, L. M. Templated growth of surface enhanced Raman scattering-active branched gold nanoparticles within radial mesoporous silica shells. ACS Nano 9, 10489–10497 (2015).
27. 27.
Rodriguez-Lorenzo, L. et al. Zeptomol detection through controlled ultrasensitive surface-enhanced Raman scattering. J. Am. Chem. Soc. 131, 4616–4618 (2009).
28. 28.
Lee, S. et al. Utilizing 3d sers active volumes in aligned carbon nanotube scaffold substrates. Adv. Mater. 24, 5261–5266 (2012).
29. 29.
Oh, Y.-J. & Jeong, K.-H. Glass nanopillar arrays with nanogap-rich silver nanoislands for highly intense surface enhanced Raman scattering. Adv. Mater. 24, 2234–2237 (2012).
30. 30.
Tang, H. et al. Arrays of cone-shaped ZnO nanorods decorated with Ag nanoparticles as 3d surface-enhanced Raman scattering substrates for rapid detection of trace polychlorinated biphenyls. Adv. Funct. Mater. 22, 218–224 (2012).
31. 31.
Zhu, Z., Bai, B., You, O., Li, Q. & Fan, S. Fano resonance boosted cascaded optical field enhancement in a plasmonic nanoparticle-in-cavity nanoantenna array and its SERS application. Light Sci. Appl. 4, e296 (2015).
32. 32.
Li, X.-M. et al. 3d aluminum hybrid plasmonic nanostructures with large areas of dense hot spots and long-term stability. Adv. Funct. Mater. 27 1605703 (2017).
33. 33.
Sharma, B. et al. Aluminum film-over-nanosphere substrates for deep-uv surface-enhanced resonance Raman spectroscopy. Nano. Lett. 16, 7968–7973 (2016).
34. 34.
Lin, D. et al. Large-area Au-nanoparticle-functionalized Si nanorod arrays for spatially uniform surface-enhanced Raman spectroscopy. ACS Nano 11, 1478–1487 (2017).
35. 35.
Pendry, J., Fernandez-Domínguez, A., Luo, Y. & Zhao, R. Capturing photons with transformation optics. Nat. Phys. 9, 518 (2013).
36. 36.
Luo, Y., Pendry, J. & Aubry, A. Surface plasmons and singularities. Nano Lett. 10, 4186–4191 (2010).
37. 37.
Huang, J. et al. Harnessing structural darkness in the visible and infrared wavelengths for a new source of light. Nat. Nanotechonol. 11, 60–66 (2016).
38. 38.
Galinski, H. et al. Scalable, ultra-resistant structural colors based on network metamaterials. Light Sci. Appl. 6, e16233 (2017).
39. 39.
Tian, Y. et al. Enhanced solar-to-hydrogen generation with broadband epsilon-near-zero nanostructured photocatalysts. Adv. Mater. 29 1701165 (2017).
40. 40.
Han, M. et al. Controllable synthesis of two-dimensional metal nanoparticle arrays with oriented size and number density gradients. Adv. Mater. 19, 2979–2983 (2007).
41. 41.
Wei, W. et al. Fabrication of large-area arrays of vertically aligned gold nanorods. Nano Lett.18, 4467-4472 (2018).
42. 42.
Huang, J.-A. et al. Ordered Ag/Si nanowires array: wide-range surface-enhanced Raman spectroscopy for reproducible biomolecule detection. Nano Lett. 13, 5039–5045 (2013).
43. 43.
Kauranen, M. & Zayats, A. V. Nonlinear plasmonics. Nat. Photonics 6, 737–748 (2012).
44. 44.
Hess, O. et al. Active nanoplasmonic metamaterials. Nat. Mater. 11, 573 (2012).
45. 45.
Brongersma, M. L., Halas, N. J. & Nordlander, P. Plasmon-induced hot carrier science and technology. Nat. Nanotechnol. 10, 25–34 (2015).
46. 46.
Le, Ru,E., Blackie, E., Meyer, M. & Etchegoin, P. G. Surface enhanced Raman scattering enhancement factors: a comprehensive study. J. Phys. Chem. C. 111, 13794–13803 (2007).
47. 47.
Zhang, Y. et al. Coherent anti-stokes Raman scattering with single-molecule sensitivity using a plasmonic fano resonance. Nat. Commun. 5, 4424 (2014).
48. 48.
Dieringer, J. A., Lettan, R. B., Scheidt, K. A. & Van Duyne, R. P. A frequency domain existence proof of single-molecule surface-enhanced Raman spectroscopy. J. Am. Chem. Soc. 129, 16249–16256 (2007).
49. 49.
Etchegoin, P. G., Meyer, M., Blackie, E. & Le Ru, E. C. Statistics of single-molecule surface enhanced Raman scattering signals: Fluctuation analysis with multiple analyte techniques. Anal. Chem. 79, 8411–8415 (2007).
50. 50.
Patra, P. P., Chikkaraddy, R., Tripathi, R. P., Dasgupta, A. & Kumar, G. P. Plasmofluidic single-molecule surface-enhanced Raman scattering from dynamic assembly of plasmonic nanoparticles. Nat. Commun. 5, 4357 (2014).
51. 51.
Patra, P. P. & Kumar, G. P. Single-molecule surface-enhanced Raman scattering sensitivity of Ag-core Au-shell nanoparticles: revealed by bi-analyte method. J. Phys. Chem. Lett. 4, 1167–1171 (2013).
52. 52.
Zhang, H. et al. Revealing the role of interfacial properties on catalytic behaviors by in situ surface-enhanced Raman spectroscopy. J. Am. Chem. Soc. 139, 10339–10346 (2017).
53. 53.
Zhang, H. et al. In situ dynamic tracking of heterogeneous nanocatalytic processes by shell-isolated nanoparticle-enhanced Raman spectroscopy. Nat. Commun. 8, 15447 (2017).
54. 54.
Haynes, W. M. CRC Handbook of Chemistry and Physics, 95th edition (CRC Press, Boca Raton, 2014).
## Acknowledgements
The work is supported by H2020 European Research Council Project Nos. 734578 (D-SPA) and 648783 (TOPOLOGICAL), Leverhulme Trust (Grant no. RPG-2012-674), the Royal Society, the Wolfson Foundation, the Engineering and Physical Sciences Research Council (EP/J018473/1), National Natural Science Foundation of China (Grant no. 11604161), the Jiangsu Provincial Natural Science Foundation (Grant no. BK20160914), Natural Science Foundation of the Jiangsu Higher Education Institutions of China (Grant no. 16KJB140009),Natural Science Foundation of Nanjing University of Posts and Telecommunications (Grant no. NY216012), the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Grant (Grant no. 752102), and the funding from KAUST (Award OSR-2016-CRG5-2995) .
## Author information
Authors
### Contributions
P.M and C.L. contributed equally to this paper. P.M. and C.L. conceived and designed the project. G.F. and A.F performed the calculations of transformation optics. C.L. and P.M. performed the numerical simulations. P.M. performed the device fabrication and measurement. C.L. did the statistical data analysis. A.F., S.Z, M.H. supervised the project. All the authors discussed the results and wrote the manuscript.
### Corresponding authors
Correspondence to Qiang Chen or Andrea Fratalocchi or Shuang Zhang.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Journal peer review information: Nature Communications thanks the anonymous reviewers for their contributions to the peer review of this work.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Reprints and Permissions
Mao, P., Liu, C., Favraud, G. et al. Broadband single molecule SERS detection designed by warped optical spaces. Nat Commun 9, 5428 (2018). https://doi.org/10.1038/s41467-018-07869-5
• Accepted:
• Published:
• ### Nanostructured InGaN Quantum Wells as a Surface-Enhanced Raman Scattering Substrate with Expanded Hot Spots
• Fan-Ching Chien
• , Ting Fu Zhang
• , Chi Chen
• , Thi Anh Nguyet Nguyen
• , Song-Yu Wang
• , Syuan Miao Lai
• , Chia-Hua Lin
• , Chun-Kai Huang
• , Cheng-Yi Liu
• & Kun-Yu Lai
ACS Applied Nano Materials (2021)
• ### Dielectric Metasurfaces Enabling Advanced Optical Biosensors
• Ming Lun Tseng
• , Yasaman Jahani
• , Aleksandrs Leitis
• & Hatice Altug
ACS Photonics (2021)
• ### Ultrasensitive plasmon enhanced Raman scattering detection of nucleolin using nanochannels of 3D hybrid plasmonic metamaterial
• Cai-Feng Shi
• , Zhong-Qiu Li
• , Chen Wang
• , Jian Li
• & Xing-Hua Xia
Biosensors and Bioelectronics (2021)
• ### Surface-enhanced resonance Raman scattering of MoS2 quantum dots by coating Ag@MQDs on silver electrode with nanoscale roughness
• Yan Lin
• , Jie Li
• , Peijie Wang
• & Yan Fang
Journal of Luminescence (2021)
• ### Recent advances in aptasensors for mycotoxin detection: On the surface and in the colloid
• Nan Zhang
• , Boshi Liu
• , Xialian Cui
• , Yutong Li
• , Jing Tang
• , Haixia Wang
• , Di Zhang
• & Zheng Li
Talanta (2021) |
# Once a Week (magazine)/Series 1/Volume 3/Auroras
(1860)
Auroras
by Francis Morton
AURORAS.
In primitive ages mystery alarmed. Knowledge of his insignificance amid the vastness of the universe inclined man to regard with superstitious awe the invisible but all-pervading forces of which he was vaguely conscious. Attributing to nature sympathy with his fortunes, he conceived that all phenomena had a direct relation to himself—that a mysterious connection existed between the events of his ephemeral life and the cyclical movements of the stars, and he uneasily sought in the complex changes of the heavens for indications of the future that might determine his faltering steps. From its weird and fantastic character the polar aurora is peculiarly adapted to elicit these emotions; and, as that seen by imagination is but the shadow of the actual projected into infinite space; so, in those ages of blood and havoc, the auroral coruscations, shaped by fearful fancy into aerial hosts contending with glittering arms, were conceived to portend proximate slaughters, which, from the spirit of the age, no sign from heaven was needed to presage. These phenomena no longer alarm us; and yet, beyond ingenious conjecture, modern science has made indifferent use of the materials accumulated by observation towards determining the real meaning or origin of auroras. Under these circumstances, the attention of the public having been attracted to late displays of singular brilliancy, and to the remarkable influence these have exercised upon telegraphic lines, some remarks may be acceptable on a subject scientifically so interesting, and, as affecting the chief means of international communication, so important to the welfare of our race.
After briefly reviewing the auroral phenomena, we propose inquiring what special conditions of the earth, atmosphere, or cosmos, ascertained to coincide with their occurrence, may be conceived to have a positive relation therewith either as cause or effect. Certes, coincidences do not in themselves constitute proofs of connection; but, when constantly recurrent, they justify a presumption to that effect, are fairly entitled to a valuation, and may possibly guide our efforts to discover the law they intelligibly suggest.
That which is specially perplexing in the aurora is the irregularity of its appearance. From earliest antiquity down to the present time it has been seen at unequal intervals, yet no period has been assigned to it, nor has anything been determined as to its law. The unknown writer of the book of Job speaks of the “Brightness that cometh out of the North.” Aristotle has recorded the phenomenon, and various other classical writers incidentally allude to it; but that it was then rare may be presumed both from the awe it inspired, and from the very position of that region whereto early science was restricted. To come to later times: in Sweden and the north of Europe it was also rare previous to the eighteenth century, and there seem to have been long intervals without any auroral appearances in England, though a lack of meteorological observations does not absolutely prove the absence of phenomena in an age indifferent to science, and inclined to prefer the comfort of repose to learned vigils. Of later years auroras have been remarkably vivid and frequent, even in places hitherto unvisited by them, for the great aurora of 1859 was the first ever observed in Jamaica since the discovery of that island.
It may be stated generally that auroras increase in frequency with proximity to the poles, but they are seen alike in the frosty winter of polar regions, and the autumn of more genial climes, the atmospheric serenity of those seasons being specially favourable to their visibility, and perchance to their occurrence. An aurora commences after sunset, rarely later than midnight, its duration varying from a few hours to several successive nights, while so manifold are its aspects and so rapid its transitions that they can scarcely be comprehended in a general description necessarily terse.
An aurora is always preceded by the appearance on the horizon of a brown haze, passing into violet, through which the stars may be dimly seen, which is diffused laterally and upward to a height of from 5° to 10°, at which it is bounded by a luminous arc. This is occasionally agitated for hours by a tremulous movement and seeming effervescence ere rays of light rush from it upward into the zenith, glowing with the prismatic colours between violet and purple red, whose rapid undulatory motion causes a continuous change in their form and splendour. Sometimes these columns of light are mingled with dark rays, somewhat like Fraunhofer’s lines in the solar spectrum; at others the whole heaven is radiant with coruscations, whose brilliancy seems intensified by the rapidity of their emission, though it is ever greatest at the arc in which they originate. When these streams of glory, rising simultaneously from various points, unite in the zenith, they form a brilliant crown of light; but this is rare, and always premonitory of the end of the aurora, which then rapidly pales and vanishes, leaving as records of its presence only a faint haze on the horizon, and a few nebulous spots arranged in streaks upon the sky. A faint sulphurous odour is at times apprehended, similar to that attendant on a thunderstorm, and a sharp crepitation has been heard, regarding which the incredulity of some in opposition to reliable testimonies is not very philosophical. Burns, who was a good observer of nature, alluded to it, and his evidence is not to be despised:
The cauld blue North was flashing forth
Her lights wi’ hissing eerie din—”
Signs of positive electricity have also been frequently observed in the atmosphere at these times.
It has been observed that auroras are most vivid and frequent when the higher atmosphere contains those delicate flowing clouds, termed cirri. These have a singular tendency to Polar arrangement, like that of the auroral rays, and occasionally a train of cirri thus disposed have been identified as having been luminous rays the preceding night,—the vehicle of an evanescent splendour.
The condition of the atmosphere, indicated by cirri, is attended with magnetic disturbances. This having been stated, the coincidence of cirri with auroras gives a special significance to their meridional direction and evolution of light at the Poles, did those facts stand alone. But of all phenomena accompanying the aurora those most invariable are magnetic ones. The needle is deflected by it first west, then east. This is noticed even in distant places where the aurora is not visible, proving that the action is not merely local; and so invariable are these magnetic disturbances, that the celebrated Arago was thereby enabled to detect the presence of an aurora from the subterranean chambers of the Paris observatory.
But the most remarkable evidence of the immediate presence of the aurora is its influence on telegraphic lines, consisting not merely in a momentary interruption of communication like that occurring during a thunder-storm, but in the magnetic action on the magnets and actual occupation of the wires. These strange phenomena vary with the intensity of the aurora, but they have been satisfactorily determined by repeated observation, all telegraphic operations being sometimes stopped for hours.
To apprehend clearly the nature of this auroral action on the wires as distinguished from that of a thunderstorm, it must be premised that the voltaic or chemical electricity used for telegraphic purposes is of low tension, continuous flow, and perfectly controllable; whereas the free electricity of the atmosphere is of high tension, exploding with vivid light when it finds a conductor, and “dying in the very moment of its birth.” During a vivid aurora a new mode of electricity, of totally distinct character from either of these, is revealed: it has low tension, chemical decomposing power, alternating polarity, induces magnetism, and produces on the electro-magnets of a line the same effect as that of continuously opening and closing the circuit. An instance of this specific action may be adduced.
In 1852, when auroras were very brilliant throughout North America, the auroral current manifested itself unmistakeably on many of the telegraphic lines. The main wire of one particular line, to which we have reference, was connected with a chemically prepared paper on a disk, and on this the ordinary atmospheric currents were actually self-registered. The usual voltaic current—decomposing the salts of the paper and uniting with the iron point of the pen—left a blue mark varying with the intensity of its action. On this occasion, the batteries being at the time detached, a dark blue line appeared on the moistened paper, and was succeeded by an intense flame which burnt through twelve thicknesses. This current then gradually died away, and was followed by a negative one which bleached and changed similarly into flame. The force which had thus intervened on the wires continued to act as long as the aurora lasted, and effectually put a stop to business.
Extraordinary as it may seem, the auroral current—the presence of which has been thus made visible—has been actually used for the transmission of human thought very recently.
The brilliant auroras of last autumn, which excited the admiration of England, while interrupting its means of communication, were not merely local, but prevailed simultaneously all over Europe, Northern Africa, Northern America, the West Indies, and Australia, satisfactorily establishing the unity of the action. This magnetic or auroral storm had rendered all the telegraphic lines of Canada and the Northern States unavailable, except at irregular intervals, for several days.
On the 2nd of September, the auroral influence being very active in the Boston terminus of the Boston and Portland line, the proper voltaic current being alternately intensified and neutralised by it visibly, it occurred to the interested operator in the office, that if the batteries were detached from the line, and the wires connected with the earth, the intruding auroral current might, perchance, be made use of. The idea is characteristically American in its utilitarianism. Having communicated this design to his Portland correspondent, the conception was immediately acted on, with fortunate success, and despatches were transmitted for two hours in that manner more effectually than could then have been done with the customary batteries.
A like extraordinary application was made of the auroral current, on the same day, on the Fall River and South Braintree Line.
To a correct apprehension of this strange occurrence it is necessary to remember that the direction of the poles of the several batteries on a line is immaterial, provided it be uniform, otherwise the currents would neutralise each other. When the aurora supervenes on a line, following in successive and differently polarised waves, the ordinary voltaic current is alternately neutralised and intensified beyond control. In the above cases—the batteries having been detached—the abnormal positive current would not increase, or the negative one decrease, the availability of the wires. The waves were observed to endure about fifteen seconds, intensifying with the time, to be succeeded by one of the reverse polarity. The singular phenomena indicating disturbances of the equilibrium of the earth’s magnetic forces have been collectively classed by Humboldt as magnetic storms. They are marked, as we have seen, by cirrous disposition of the clouds, perturbations of the needle, obsession of telegraph-wires, and the aurora. The evolution of light in the latter invariably terminates the movement, as in a thunderstorm lightning re-establishes the equilibrium of the atmospheric electric forces.
After these illustrations of the phenomena attendant on the aurora, some attention may be directed to an inquiry into its causes.
Whatever may be its origin, that the auroral action takes place within the limits of the atmosphere, scarcely higher than the region of cirri, and that it participates” in the movement of the earth, appears from the fact that the diurnal rotation, at the rate of a thousand miles an hour, effects no perceptible change in its aspect. Its absolute height has been variously estimated: by Euler at thousands of miles, by others as within the cloud region. It has been erroneously conceived that the height might be determined by observation of the corona, which is only an effect of perspective, owing to the convergence of parallel rays; each individual seeing his own aurora, as his own rainbow, from his particular point of view. As the centre of the arc is always in the magnetic meridian, simultaneous observation from two stations on the same meridian, with an interval sufficient to constitute a reliable base, might however effect the desired object.
The accepted theory with scientific men is, that the aurora is an electrical phenomenon occurring in the atmosphere, consisting in the production of a luminous ring with divergent rays, having for its centre the magnetic pole, and its production is supposed to be thus accounted for. The atmosphere and the earth are in opposed electric conditions, the neutralisation of which is effected through the moisture wherewith the lower air is charged. In the Polar regions, whereto the great tropical currents are constantly bearing aqueous vapour, which the cold condenses in the form of haze, this catalysis would most frequently occur. When the positively electric vapour is brought into contact with the negatively electric earth, equilibrium would be effected by a discharge, accompanied in certain states of the atmosphere by the auroral light. This is assumed to be contingent on the presence in the atmosphere of minute icy particles, constituting a haze, which becomes luminous by the electric discharge. Aeronauts have found the atmosphere at great heights, while serene and cloudless, to be pervaded by this transparent haze of which cirri are conceived to consist.
In confirmation of this hypothesis, it has been experimentally shown that when the union of the two electricities is effected in rarified air near the pole of a magnet, a luminous ring is produced which has a rotary motion according to the direction of the discharge. Thus then, when electrical discharges occur in the polar regions between the positive electricity of the atmosphere and the negative electricity of the earth, the magnetic poles of the earth would exercise a similar influence on the icy haze which is conceived essential to the evolution of the auroral light. Thus the arc seen by the observer would be that portion of the luminous ring above his horizon, varying with the distance from the pole. Only when it reaches his zenith could he be in immediate contact with the auroral haze, and then only would the asserted crepitation become audible, which is assumed to be identical in nature with that produced by an electrical machine. The sulphurous odour would be due to the generation of ozone from the oxygen of the air.
Now, though this theory would intelligibly explain the mode of phenomenal manifestation, it may reasonably be objected that, in hypothetising a continuous electric action in the atmosphere, it does not sufficiently account for the ascertained periodicity of auroras by assuming that their visibility and the variation in their intensity are consequent on the condition of the atmosphere. It is discreetly silent as to the mode of induction of this special atmospheric condition; and therefore—assuming their invariable coincidence and connection—as to the efficient cause of auroras.
We humbly conceive that the cause must be sought beyond the atmosphere in the fluctuations of that great solar force, to which is primarily attributable the induction of telluric magnetism, and which must enter as a prime motive in all atmospheric phenomena.
The irregularities of solar action have an intelligible exponent in the phenomenal changes observable on the disk of the sun. Its spots are subject to remarkable variations in form and size, contracting or dilating in unison with the variable vivacity of its constitutional force, and the period of these variations—secular, annual, and diurnal—have been approximately determined.
The direct relation between these oscillations of the solar atmosphere and the intensity or direction of the magnetic forces, as indicated by the needle, long inferred, are now satisfactorily established. From late observations made at Christiania, in Norway, by Hansteen, it has been ascertained that the maximum of magnetic intensity corresponds with the minimum of inclination; and that for both the period of oscillation is 11${\displaystyle {\frac {1}{9}}}$ days, which is precisely the shorter period assigned by Wolff to the solar spots.
To express these results in less technical language, when the luminous atmosphere of the sun is more equally diffused, indicating the highest energy of that constitutive force pervading, vitalising, and perchance evolving it; then, through the tremulous medium of the intervening ether, the earth thrills responsively with intenser life. This epoch of exceptional magnetic intensity is that specially signalised by auroras, more or less vivid, by atmospheric perturbations, and occasionally by volcanic convulsions.
The remarkable auroras of last autumn have been succeeded by anomalous and unkindly seasons, ominous of coming sorrow, which, if not within the power of man to prevent, he might have been prepared to alleviate, or courageously endure, had he been better able or more willing to “discern the face of the sky,” if not from love of abstract science, from the lower consideration of his material comfort.
Whatever the wilful ignorance of man, since he is rarely entirely deprived of divine guidance, or unillumined by transient gleams of light—obscured and diffracted though it be by the medium through which it is transmitted—might not the vague alarm of antiquity represent a dim and confused apprehension that auroras were symbols of the variable activity of a central force, with the fluctuations in which the condition of the earth, as the abode of human life, was connected?
Francis Morton. |
We upgraded Indico to version 3.0. The new search is now available as well.
# 26th International Conference on Supersymmetry and Unification of Fundamental Interactions (SUSY2018)
23-27 July 2018
Barcelona
Europe/Zurich timezone
## Darkside latest results and the future liquid argon dark matter program
24 Jul 2018, 14:50
20m
Room C
Talk (closed)
### Speaker
Roberto Santorelli (Centro de Investigaciones Energéti cas Medioambientales y Tecno)
### Description
DarkSide uses a dual-phase Liquid Argon Time Projection Chamber to search for WIMP dark matter. The talk will present the latest result on the search for low mass ($M_{WIMP} <20GeV/c^2$ ) and high mass ($M_{WIMP}>100GeV/c^2$) WIMPs from the current experiment, DarkSide-50, running since mid 2015 a 50-kg-active-mass TPC, filled with argon from an underground source. The next stage of the Darkside program will be a new generation experiment involving a global collaboration from all the current Argon based experiments.
DarkSide-20k, is designed as a >20-tonne fiducial mass TPC with SiPM based photosensors, expected
to achieve an instrumental background well below that from coherent scattering of solar and atmospheric neutrinos. Like its predecessor DarkSide-20k will be housed at the Gran Sasso (LNGS) underground laboratory, and it is expected to attain a WIMP-nucleon cross section exclusion sensitivity of $10^{-47}\, cm^2$ for a WIMP mass of $1 TeV/c^2$ in a 5 yr run.
Parallel Session Dark Matter, Astroparticle Physics
### Primary author
Roberto Santorelli (Centro de Investigaciones Energéti cas Medioambientales y Tecno) |
### Jon Gold
This post is 3 years old — please take it with a pinch of salt!
# Declarative Design Tools
Design is a process of divergence and convergence.
We receive a project brief, a set of constraints, and set about exploring all the ways to satisfy them. Through the course of a project we diverge, we branch, we come up with options, and then we converge by using the sum of our team’s experience as designers, stakeholders, and humans to pick the optimal solution.
This process repeats in increasing levels of fidelity until we are done and the project is finished. We diverge and converge in yellow sticky notes on the wall, in wireframes on whiteboards and in mockup tools, through to high-fidelity final assets: we diverge and converge on a brand identity, a chair, an icon, a house, a website, or a piece of software.
Design is a process of divergence and convergence. Constrain your divergent thought too soon and you’re resigned to end up with repetitive solutions and a lack of creativity and depth.
### Our bicycles
Computers are tools for making us better than ourselves. The Steve Jobs quote about computers being ‘bicycles for the mind’ is so common as to lose meaning nowadays, but it’s a fantastic observation. Computers are extensions of ourselves; tools that let us work better, faster and smarter. Humans aren’t likely to spawn wings, dorsal fins or gain the powers of invisibility anytime soon, but in place of biological evolution, we have something more enticing: evolving ourselves through our machines. We can decide how we want our mental capabilities to evolve and steward that change ourselves. In this information age, our text editors are the means of production and the means of our transformation.
Our design tools haven’t really changed since the introduction of the Mac, though. Over the last 30 years, we’ve had flings with Quark and Quartz Composer, Corel Draw and Creative Suite, Sketch and SketchUp, Photoshop and Freehand and Fireworks and Framer and Figma, but they all operate on a fundamentally similar principle.
Canvas, document, direct property manipulation. Newer tools are skirmishing over code or nodes rather than drag-and-drop designing, but they’re working on the same level. We have a one-to-one mapping to our tools; we edit one font size at a time, one color at a time, one weight at a time, one border radius at a time.
This is a marvelous feat! We have a direct mapping between design in our heads and design on screen. Given the requisite technical skills, we can bargain with our computers to produce our wildest creative visions. Rather than speccing out an idea for a poster and waiting days or weeks for a printer—a person, not temperamental inkjet contraption—to realize it (like our Swiss heroes had to), we have a near-instant feedback loop. We think and we artwork: a one-to-one mapping. Let that sink in for a second; we’ll come back to it later.
### Tales from the design studio
Grab a fistful of twenty-sided dice; it’s time for some roleplaying! We’re playing a game of Designers & Deadlines; a captivating way for people who enjoy arguing about long shadows and skeuomorphs to spend an afternoon.
DUNGEON MASTER: Our heroes are a Level 99 Hovering Art Director and a Level 2 Junior Designer. The setting: a design studio. The walls appear to be made of exposed brick; clearly geographically placing us in Shoreditch or Williamsburg or The Mission or Kreuzberg or Södermalm or…
HOVERING ART DIRECTOR: This could be anywhere.
DM: Fair point. In front of you there are Thunderbolt displays; behind you people are loudly and inconsiderately playing ping pong whilst you’re trying to have a design review. Someone in the background is tweeting about how much they love Draplin or feigning incredulity at Designer News or something. To your left, there is a stack of molesk…
JUNIOR DESIGNER: This joke is wearing thin; we get it.
DM: Then let’s begin!
HAD: Thanks for taking the time to present those mockups of the article page for the website we’re designing. I thought I’d jump in with a couple of quick suggestions and help you tighten up the typography. Let’s start with this headline. I’d love to see you try out these 4 other fonts. I see you’ve set it in 26pt type; can you please try 22, 24, 28 or 32pt instead? And that would be great to feel it out in regular, demi, bold and black weights of each. Ooh, and with a margin of 1, 2 or 4 of our vertical rhythm’s units.
JD: 4 typefaces, 4 sizes, 4 weights… and 3 positions? That’s 192 variations!
HAD: Indeed it is. Now on to that body copy. How about we see it in these 6 fonts; in light and regular; in 15, 16 and 17pt, and in 1.4, 1.5 or 1.6x line-height for good measure?
JD: 192 ✕ 6 ✕ 2 ✕ 3 ✕ 3…off the top of my head that’s 20,736 variations!
HAD: Wonderful. Artwork up each of those, spray-mount them to foam board and let’s get them on the wall for the next round of reviews in an hour.
JD: 😪
DM: I’m so sorry.
Fantastic. Time to get to the digital drawing board.
### Our brains and our eyes
Using our experience and intuition, we formulate options in our head, and then rationalize them in our design tool du jour. We come up with hypotheses of things that might look good, and then run experiments to see what actually works.
Our laboratories are software and rather than messing with hydrocarbons and making explosions, our experiments revolve around copying, pasting, duplicating artboards, changing a detail, trying to remember what we were doing, or getting bored or distracted.
We can’t skip this stage; we need to see our hypotheses on a screen or printed. No designer, no matter how talented or experienced, can design entirely in their head.
J. C. R. Licklider’s Man-Computer Symbiosis describes our current workflows with remarkable prescience:
About 85% of my “thinking” time was spent getting into a position to think, to make a decision, to learn something I needed to know. Much more time went into finding or obtaining information than into digesting it. Hours went into the plotting of graphs, and other hours into instructing an assistant how to plot. When the graphs were finished, the relations were obvious at once, but the plotting had to be done in order to make them so. At one point, it was necessary to compare six functions relating speech-intelligibility to speech-to-noise ratio. Several hours of calculating were required to get the data into comparable form. When they were in comparable form, it took only a few seconds to determine what I needed to know.
Lick was writing in 1960 (56 years ago), in the early years of digital computing. The first experiments into GUIs wouldn’t happen at Xerox PARC for over a decade; the release of the Apple ][ (and with it the widespread adoption of personal computers by everyday knowledge workers) was 17 years away.
The first video we have of Steve Jobs hinting at the ‘bicycles for the mind’ analogy was in 1980 (the more famous clip is another decade older).
I’m dwelling on the dates because they’re important. Though Lick might not have been fully satisfied by our current tools, the past 56 years have blessed us with a wealth of staggering innovation following his lead. Considering the problems he was writing about—laborious paper plotting of the results of scientific experiments—our spreadsheets and MATLAB simulations surely seem like wizardry.
And yet… our design process is still limited by our meatspace interactions bridging between our brains and devices that have unbounded computational power. Our brains and computers are fast; our hands, mice and keyboards are slow.
Designing 20,000 variations of a component is a low figure too - that’s only a few fonts and a few sizes. Think of all the possible permutations for every element in the last real project you designed!
Of course, there aren’t enough flat whites in the world to keep anyone focused enough to really design 20,000 variations of a component, so we compromise. We skimp. We fall short of fully exploring the combinatorial space because our design tools haven’t really changed since 1984.
Lick continues, and the parallels sound hauntingly familiar:
My “thinking” time was devoted mainly to activities that were essentially clerical or mechanical: searching, calculating, plotting, transforming, determining the logical or dynamic consequences of a set of assumptions or hypotheses, preparing the way for a decision or an insight. Moreover, my choices of what to attempt and what not to attempt were determined to an embarrassingly great extent by considerations of clerical feasibility, not intellectual capability.
To me, the design process is just divergence and convergence; thinking and seeing. Brain and eyes. The middle bits; the exponential time increase between thinking and seeing—that’s not design, it’s repetitive manual labor. Some people see that as design, but really it’s getting in a position to think; getting in a position to design. Design is a game for our brains and our eyes.
If we’re the type of designer who likes to write computer code, we might start to draw analogies. There is a spectrum in programming paradigms: from imperative to declarative.
Imperative programming is telling the computer how to calculate something. We give it step by step instructions—procedures—to get to the answer.
Let’s demonstrate this in JavaScript. Imagine we have a list of numbers, and want to square each of them, then add up the total.
// imperative
function sumOfSquares(nums) {
var i,
sum = 0,
squares = []
// make an array of the squares of each input
for (i = 0; i < nums.length; i++) {
squares.push(nums[i] * nums[i])
}
// loop through the list
for (i = 0; i < squares.length; i++) {
sum += squares[i]
}
return sum
}
sumOfSquares([1, 2, 3, 4, 5]) // 55
Declarative programming, by contrast, focusses on what we want to calculate - we don’t concern ourselves with the details.
// declarative
const square = a => a * a
const add = (a, b) => a + b
// these functions work on their own
square(2) // 4
map(square, [1, 2, 3]) // [1, 4, 9]
sum([1, 2, 3]) // 6
// and they work together
const sumOfSquares = pipe(
map(square),
sum
)
sumOfSquares([1, 2, 3, 4, 5]) // 55
The declarative example is neat, composable and super readable; the imperative one messy and full of manual array shuffling.
This might seem tangential to the example of our poor junior designer copying & pasting artboards until RSI sets in, but it’s an important headspace to be in whilst considering our design tooling. With our current tools we’re telling the computer how to design the vision we have in our head (by tapping on our input devices for every element on the screen); in our future tools we will tell our computers what we want to see, and let them figure out how to move elements around to get there.
### A smattering of set theory
The cool thing is this that switching our design tools from imperative to declarative isn’t a difficult problem to solve: computers have all sorts of fancy and esoteric concepts like variables, loops, and arrays which we can use to build tools that fix our design process.
Time for a quick math lesson. Here are some sets:
$Families = \{Akkurat, Gotham, Tiempos\}$ $Sizes = \{20, 30, 40\}$
We can multiply them together like this:
$Families \times Sizes = \{(Akkurat, 20), (Akkurat, 30), (Akkurat, 40), (Gotham, 20), (Gotham, 30) …\}$
That multiplication is called the Cartesian product: all the possible combinations of the items in each set. We can define it in fancy math symbols so people on the internet think we’re smart:
$A \times B = \{(a, b) | a \in A \land b \in B\}$
That’s the binary Cartesian product: the product of two sets. We can also derive the n-ary Cartesian product; all of the combinations of hella sets (always remember that n- just means ‘hella-’).
In “I failed math in high school but I’m trying hard to sound clever” symbols, that looks like this:
$\prod\_{i=1}^n S_i=S_1 \times S_2 \times … \times S_n=\{(s_1,s_2,…,s_n) \vert s_1 \in S_1 \land s_2 \in S_2 \land … \land s_n \in S_n \}$
Squinting a little, this is what we’re doing (by hand) when we try to artwork all the possibilities of a design problem. We have to copy and paste the artboards ourselves, but we’re fundamentally calculating the product of a bunch of properties. Here’s an example in colors and shapes:
Now that we have the language to describe why our current tools are slow (they’re imperative! we have to move our meat-hands and tell the computer how to do our work for us!) and what we would like them to be (powerful aids to explore the full permutational space of our designs!), we can formulate alternatives.
A whole world of Declarative Design Tools is waiting to be built; here’s mine:
René is the tool I’ve been working on since leaving The Grid in February. It’s a design tool like those you’ve used before: it has artboards and layers, fonts, colors, border radiuses; all the bits & pieces you might expect to see:
It just has slightly different input fields. They’re input fields that can take a range of options, allowing you to tell the computer all of the variations of a component you’d like to see. René is a Declarative, Permutational Design Tool. This is how it works:
Sweet! All the possible combinations of our design. We can add to a list of options we want to try out and see how they work out. It’s unbelievably quick. Once we’ve seen a few options we can zoom back in on one or two—this is convergence—and start again. We branch out to visualize lots of different options, then zoom back in on the ones that look good. Divergence and convergence.
Here’s another example that demonstrates button variations. We tell René what we want: different fonts, letter-spacings, border-radiuses, and then it shows us all of our options.
### Brute-force design
Whilst this multiplicative design gives us a huge benefit, we don’t want to get carried away with it; our ideal solution sits somewhere between our old school manual design, and brute forcing a solution. Indeed, it would be easy to just produce every possible permutation of design possible. Every combination of every typeface, size, color & position around. The problem is that design is a permutationally unbounded space, with a finite range of acceptable options. We quickly end up with more outcomes than a human can possibly comprehend, but still have to filter them down by hand. Overwhelmed with choice we disconnect; the tool loses meaning.
Without the aesthetic, the computer is but a mindless speed machine, producing effects without substance.
— Paul Rand
In my experiments, the ideal range of permutations to explore at each stage is around 4-64. This is a mix of cognition and screen real estate - we can’t comprehend many more variations anyway, but we also run out of room to display them on. Plugging in two 27” monitors helps, but I’m more excited by the possibility of AR and VR design tools that let permutations escape the confines of our screens and roam free across the walls in our offices.
The entire workflow in René might surface many thousands of permutations, but they’re experienced through a recurrent process of diverging & converging.
You can play with René right now. It’s still a prototype, but I couldn’t really finish this post without a demo, could I? Some things to note:
• Right now, you can only edit one of two files: an article preview, or a button. I’ve used René to design full websites, but I’m limiting the scope of the demo right now. If you read Taking The Robots To Design School you might be able to piece together why ;)
• The font list is loaded with Google Fonts; you can also use any local fonts by typing them in.
• It’s built with React, Redux, Ramda and Flow - all the modern favorites.
Have fun, and let me know what you think on Twitter!
✌️👽 |
Finding the root of 75
• Aug 15th 2012, 11:54 PM
ariel32
Finding the root of 75
I have been told that in order to find the root of 75 you need to factor it
and then do 5 times root of 3 or something like that
anyway why cant i just break 75 into 2 times 35 and so on?
in any case in simple terms can you explain it?
• Aug 16th 2012, 01:46 AM
earboth
Re: Finding the root of 75
Quote:
Originally Posted by ariel32
I have been told that in order to find the root of 75 you need to factor it
and then do 5 times root of 3 or something like that
anyway why cant i just break 75 into 2 times 35 and so on?
in any case in simple terms can you explain it?
1. You can easily calculate the square-root of a number if this number is a square: $\sqrt{9} = 3~or~\sqrt{\frac4{25}}=\frac25$
2. If the number is not a square (75 is not a square of a rational number) you can express $\sqrt{75}$ as an approximation in decimal form. You have then a result with a lot of digits - and it is still not accurat!
Therefore you try to split a number into a product of a square (the square should be as large as possible) and another rational number. So you are able to calculate the square-root at least from the square. The other number remains under the root sign.
Example:
$\sqrt{128} = \sqrt{4 \cdot 32} = \sqrt{16 \cdot 8} = \sqrt{64 \cdot 2}$
Since 64 is the greatest square you'll get $\sqrt{128} = 8 \cdot \sqrt{2}$
• Aug 16th 2012, 05:34 PM
hacker804
Re: Finding the root of 75
$\sqrt{75}=\sqrt{3\cdot5\cdot 5}$
$\sqrt{3\cdot5^{2}}$
$5\sqrt{3}$
• Aug 17th 2012, 11:41 PM
kalwin
Re: Finding the root of 75
Rewrite each as a multiple of a square:
√(25 * 3)
√25 * √3
5√3 |
# signalilo
signalilo is a Commodore component to manage signalilo.
See the parameters reference for further details. |
• A
• A
• A
• ABC
• ABC
• ABC
• А
• А
• А
• А
• А
Regular version of the site
## Combinatorial formulas for cohomology of knot spaces
Moscow Mathematical Journal. 2001. Vol. 1. No. 1. P. 91-123.
We develop homological techniques for finding explicit combinatorial expressions of finite-type cohomology classes of spaces of knots in Rn,n≥3, generalizing Polyak--Viro formulas for invariants (i.e. 0-dimensional cohomology classes) of knots in R3. As the first applications we give such formulas for the (reduced mod 2) {\em generalized Teiblum--Turchin cocycle} of order 3 (which is the simplest cohomology class of {\em long knots} R1↪Rn not reducible to knot invariants or their natural stabilizations), and for all integral cohomology classes of orders 1 and 2 of spaces of {\em compact knots} S1↪Rn. As a corollary, we prove the nontriviality of all these cohomology classes in spaces of knots in R3. |
# Given an array, find all its elements that can become a leader
I was successful in solving a challenge in codility, but my solution failed performance tests. How can I improve my solution?
Challenge:
Integers K, M and a non-empty array A consisting of N integers, not bigger than M, are given.
The leader of the array is a value that occurs in more than half of the elements of the array, and the segment of the array is a sequence of consecutive elements of the array.
You can modify A by choosing exactly one segment of length K and increasing by 1 every element within that segment.
The goal is to find all of the numbers that may become a leader after performing exactly one array modification as described above.
Write a function:
def solution(K, M, A)
that, given integers K and M and an array A consisting of N integers, returns an array of all numbers that can become a leader, after increasing by 1 every element of exactly one segment of A of length K. The returned array should be sorted in ascending order, and if there is no number that can become a leader, you should return an empty array. Moreover, if there are multiple ways of choosing a segment to turn some number into a leader, then this particular number should appear in an output array only once.
For example, given integers K = 3, M = 5 and the following array A:
A[0] = 2
A[1] = 1
A[2] = 3
A[3] = 1
A[4] = 2
A[5] = 2
A[6] = 3
the function should return [2, 3]. If we choose segment A[1], A[2], A[3] then we get the following array A:
A[0] = 2
A[1] = 2
A[2] = 4
A[3] = 2
A[4] = 2
A[5] = 2
A[6] = 3
and 2 is the leader of this array. If we choose A[3], A[4], A[5] then A will appear as follows:
A[0] = 2
A[1] = 1
A[2] = 3
A[3] = 2
A[4] = 3
A[5] = 3
A[6] = 3
and 3 will be the leader.
And, for example, given integers K = 4, M = 2 and the following array:
A[0] = 1
A[1] = 2
A[2] = 2
A[3] = 1
A[4] = 2
the function should return [2, 3], because choosing a segment A[0], A[1], A[2], A[3] and A[1], A[2], A[3], A[4] turns 2 and 3 into the leaders, respectively.
Write an efficient algorithm for the following assumptions:
• N and M are integers within the range [1..100,000];
• K is an integer within the range [1..N];
• Each element of array A is an integer within the range [1..M].
My Solution
def modify(segment):
    return [e+1 for e in segment]

def dominant(A):
    d = dict()
    lenOfHalfA = int(len(A)/2)
    domList = []
    for i in A:
        if not i in d:
            d[i] = 1
        else:
            d[i] = d[i]+1
    for key, value in d.items():
        if value > lenOfHalfA:
            domList.append(key)
    return domList

def solution(K, M, A):
    # write your code in Python 3.6
    dominantList = []
    x = 0
    while x <= len(A) - K:
        modifiedA = A[:]
        modifiedA[x:K+x] = modify(A[x:K+x])
        dominantList += dominant(modifiedA)
        x += 1
    return list(set(dominantList))
• This question might benefit from an example and it's output. That would make the workings a bit clearer than just text. – Gloweye Oct 4 '19 at 14:12
• @Gloweye I have added some examples now. – Harith Oct 4 '19 at 18:21
• yes, looks a lot easier to understand the purpose. I've suggested a few edits that show it more like python lists. So essentially, the "leader" is the most common number, and the output is the most common number in either the current array ( increasing a slice of length zero) or any array with a certain slice incremented. – Gloweye Oct 4 '19 at 22:13
• Are you using list(set(dominantList)) to sort your list? – ades 8 hours ago
def modify(segment):
    return [e+1 for e in segment]
This function is used only in one place, and is only one line. That often means it's better to just inline it:
modifiedA[x:K+x] = modify(A[x:K+x])
# Becomes:
modifiedA[x:K+x] = [e+1 for e in A[x:K+x]]
Use meaningful variable names. No matter what your challenge says, K, M and A are not meaningful variable names. It also seems like you're not doing anything with that M, so why do we even pass it to the function?
In your dominant() function, you look like you want to use collections.Counter. For practice with a language it can be good to check how to set it up yourself, but sometimes we have a good solution ready-made and available, and much better tested than any individual developer could ever do.
With Counter, you can make it work like this:
from collections import Counter

def dominant(string):
    count = Counter(string)
    return [key for key, value in count.items() if value == count.most_common(1)[0][1]]
Yeah, it's as simple as that. You could split it out over multiple lines, for clarity:
from collections import Counter

def dominant(string):
    count = Counter(string)
    dom_list = []
    max_occurred = count.most_common(1)[0][1]
    for key, value in count.items():
        if value >= max_occurred:
            dom_list.append(key)
    return dom_list
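If the intent is to keep the challenge's actual rule - a value is a leader only when it occurs in more than half of the elements, not merely when it is the most common value - a sketch along the same lines can express that directly with Counter:

from collections import Counter

def dominant(values):
    half = len(values) // 2
    return [key for key, count in Counter(values).items() if count > half]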
• That often means it's better to just inline it - In this case I agree, but in general I don't. Functions are cheap, and inlining for performance is about the last thing you should try. The modularity, testability and legibility that functions offer is important, and well worth the tiny performance hit. The only reason I agree here is that the purpose of the function isn't well-defined. – Reinderien Oct 4 '19 at 14:37
• There's loads of cases where functions are appropriate, but it can also be premature optimization and/or a lot of scrolling to see what a line does. In my opinion, primary reason for functions is to avoid repetition, and secondary is to aid readability and make it easier to see what a block of code does. If there's no repetition to avoid and no readability gained with a function, then IMO it shouldn't exist. – Gloweye Oct 4 '19 at 14:46
• Would inline here even give a speed increase? I was under the impression that lambda functions appear identical to normal (local) functions in the bytecode. – ades 8 hours ago
You can speed some things up by using more list and dict comprehensions, and by reducing your function calls. Some examples follow.
d = dict() vs d = {}: 131 ns vs 30 ns.
len_of_half_a = int(len(a)/2) vs len_of_half_a = len(a)//2: 201 ns vs 99 ns.
I used Python 3.8.1 for both tests.
Granted that this isn't much, but several of these tiny improvements could help you reach the target. You should see similar if not better performance increases by using list and dict comprehensions, e.g. for domList, d, and dominantList. And replacing your while x <= len(A) - K: with a range-based iterator should bump you up a little more.
And a final small note is that you should try to follow standards with your naming and ensure clear and obvious names. A is not a good name for a variable, nor is d, and python tends to use snake_case over camelCase.
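Putting those points together, the main loop might look something like this - still the same brute-force approach, just with a range-based loop, a comprehension and snake_case names (it assumes a dominant helper that keeps the more-than-half rule, such as the one in the original solution):

def solution(K, M, A):
    leaders = set()
    for start in range(len(A) - K + 1):
        modified = A[:]
        # increase exactly one segment of length K by 1
        modified[start:start + K] = [e + 1 for e in modified[start:start + K]]
        leaders.update(dominant(modified))
    return sorted(leaders)

Note that set() on its own does not order anything; it is the sorted() call that produces the ascending order the task asks for.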
### Algorithm in O(N^2)
The current solution will be slow for large N, because it has a complexity of O(N^2): it checks every element in the array for every possible position of the K adjusted elements => O(N * (N-K)) => O(N^2).
### There is an O(N) solution.
Consider that as the K-element segment "slides" along the array, there are only two elements whose value changes: the one entering and the one leaving the segment.
Something like:
from collections import Counter

def solution(K, M, A):
    threshold = len(A) // 2
    counts = Counter(A)
    # Apply the first window: each element of A[:K] is increased by 1,
    # so its old value loses one occurrence and value + 1 gains one.
    for e in A[:K]:
        counts[e] -= 1
        counts[e + 1] += 1
    leaders = set(k for k, v in counts.items() if v > threshold)
    # Slide the window along the array, adjusting the counts for the
    # element that leaves (tail) and the element that enters (head):
    # the leaving element is reverted, the entering element is applied.
    for tail, head in zip(A, A[K:]):
        counts[tail] += 1
        counts[tail + 1] -= 1
        counts[head] -= 1
        counts[head + 1] += 1
        # Only the four values just touched can have become leaders.
        for v in (tail, tail + 1, head, head + 1):
            if counts[v] > threshold:
                leaders.add(v)
    return sorted(leaders)
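As a quick sanity check, the completed sketch reproduces the two examples from the problem statement:

print(solution(3, 5, [2, 1, 3, 1, 2, 2, 3]))  # [2, 3]
print(solution(4, 2, [1, 2, 2, 1, 2]))        # [2, 3]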
# Relative frequency
Relative frequency is an estimate of probability and is calculated from repeated trials of an experiment.
The theoretical probability of getting a head when you flip a coin is 1/2, but if a coin was actually flipped 100 times you may not get exactly 50 heads, although it should be close to this amount.
If a coin was flipped a hundred times, the number of times a head actually did appear would give the relative frequency, so if there were 59 heads and 41 tails the relative frequency of flipping a head would be 59/100 (or 0.59 or 59%).
Relative frequency is used to estimate probability when theoretical probability cannot be used.
For example, when using a biased die, the probability of getting each number is no longer 1/6. To be able to assign a probability to each number, an experiment would need to be conducted. From the results of the trials, the relative frequency could be calculated.
The more trials in the experiment, the more reliable the relative frequency is as an estimate of the probability.
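To see how the relative frequency behaves as the number of trials grows, here is a minimal Python sketch (an illustration added here, not part of the original text) that simulates coin flips and prints the relative frequency of heads for increasing numbers of trials:

import random

def relative_frequency_of_heads(num_flips):
    # Flip a fair coin num_flips times and count the heads
    heads = sum(1 for _ in range(num_flips) if random.random() < 0.5)
    return heads / num_flips

for n in (10, 100, 1000, 10000):
    print(n, relative_frequency_of_heads(n))

As the number of flips increases, the printed values tend to settle close to the theoretical probability of 0.5.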
### Example
Ella rolls a biased die and records how many times she scores a six. Estimate the probability of scoring a six with Ella’s die.
| Number of rolls (trials) | 10 | 20 | 30 | 40 | 50 |
| --- | --- | --- | --- | --- | --- |
| Total number of sixes | 2 | 3 | 6 | 8 | 9 |
Ella’s results will give different estimates of the probability, depending on the number of rolls of the die (the number of trials).
For example, after 10 rolls, the estimate for the probability of scoring a six is 2/10 = 0.2, but after 20 rolls the estimate is 3/20 = 0.15.
The most reliable estimate of the probability is found by using the highest number of rolls, which gives 9/50 = 0.18.
# How to optimize my CSS/HTML webpage?
I am working on a (HTML/CSS) website to improve my skills but I am not sure if I am doing it the right way, because I have a lot of floats and sizes. I also need help with some CSS things:
What I have:
What I am Shooting for
The red dimensions in the image are the dimensions I've tried to give the objects, and I am not sure if this is the correct way of doing it. The black words are the things I would like to change in the future, but I need this code reviewed first.
All my code: INDEX.HTML
<div id="wrapper">
<div id="mainContent">
<div id="newsHolder">
<div id="item1">
<img src="img/item1.jpg">
</div>
<div id="item2">
<img src="img/item2.jpg">
</div>
<div id="item5">
<img src="img/item3.jpg">
</div>
<div id="item3">
<img src="img/item4.jpg">
</div>
<div id="item4">
<img src="img/item5.jpg">
</div>
</div>
</div>
<div id="sidebar-right">
<p>sidebar</p>
</div>
<div id="newsList">
<p> Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book.</p>
</div>
</div>
<!--<div id="footer"></div>-->
STYLE.CSS
@import url(reset.css);
body {
background:#f1f1f1;
}
float:left;
min-width:100%;
height:90px;
}
min-width:100%;
height:60px;
background:#B52B42;
}
width:1180px;
height:60px;
margin:0 auto;
}
#home {
float: left;
width: 140px;
}
float: left;
width: 1180px;
}
#wrapper {
width:1180px;
margin:0 auto;
}
#mainContent {
width:100%;
float:left;
background-color:#FFF;
}
#newsHolder {
float:left;
width:840px;
height:493px;
}
#item1 {
float:left;
width:477px;
height:320px;
margin-bottom:5px;
}
#item2 {
float:right;
width:358px;
height:244px;
margin-bottom:5px;
}
#item3 {
float:left;
width:236px;
height:167px;
margin-right:5px;
}
#item4 {
float:left;
width:236px;
height:167px;
}
#item5 {
float:right;
width:358px;
height:244px;
}
#item1 img,#item2 img,#item3 img,#item4 img,#item5 img {
width:100%;
height:100%;
}
#sidebar-right {
float:right;
width:340px;
background:#FFF;
}
#newsList {
float:left;
width: 840px;
background:#FFF;
}
#footer {
float:left;
min-width:100%;
height:70px;
background:#006;
}
I did not post the CSS code of the navigation menu because it is already working correctly.
I would be very happy if anyone can help me make my HTML/CSS cleaner
• Code review is not for "fixing HTML/CSS layout to look correctly". Code Review is essentially about having working and correct code, then rewriting parts of that code so that you still get the same result but in a better way. Your question is therefore off-topic because it's about fixing a problem. – Simon Forsberg Dec 16 '13 at 15:22
• Keep your CSS as DRY as Possible. you are repeating your width several times. item 3 & 4 differ only by a margin-right 2 & 5 by a margin-bottom you could add a class to 2 & 5 and then add another class for the margin, I hope that makes sense. then if you want another row you can just repeat. you shouldn't use ID's for those items, because then you can't use that style again. – Malachi Dec 16 '13 at 16:08
• you should provide all the html, I don't know what is going on with your Headers, but the CSS for them looks wrong. you should apply the width to the outermost element and then 100% the elements inside that you want to be the same width. – Malachi Dec 16 '13 at 16:09
Instead of using IDs you should be using classes, like this:
<div id="wrapper">
<div id="mainContent">
<div id="newsHolder">
<div class="item1">
<img src="img/item1.jpg">
</div>
<div class="item2">
<img src="img/item2.jpg">
</div>
<div class="item5">
<img src="img/item3.jpg">
</div>
<div class="item3">
<img src="img/item4.jpg">
</div>
<div class="item4">
<img src="img/item5.jpg">
</div>
</div>
</div>
<div id="sidebar-right">
<p>sidebar</p>
</div>
<div id="newsList">
<p> Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book.</p>
</div>
</div>
<!--<div id="footer"></div>-->
Then you could change it a little bit more, because you have the exact same CSS for items 3 & 4, so I would change those to something like this:
<div class="smallLeft rtMargin"> <!-- Generic Class Names -->
<img src="img/item4.jpg">
</div>
<div class="smallLeft">
<img src="img/item5.jpg">
</div>
.smallLeft {
float:left;
width:236px;
height:167px;
}
.rtMargin {
margin-right:5px;
}
Same concept for items 2 & 5. Also remember that CSS executes top to bottom and will overwrite itself: if you style the same element with two different selectors, in different CSS documents or even in the same one, the last one will prevail (in that instance).
## Something from my Comment
you should provide all the html, I don't know what is going on with your Headers, but the CSS for them looks wrong. you should apply the width to the outermost element and then 100% the elements inside that you want to be the same width. – Malachi 1 hour ago |
# Fibonacci Nim
## Fibonacci Numbers
1,1,2,3,5,8,13,21,34,55,89: what's next?
If you guessed 144, you may be on to something we call the Fibonacci numbers, after a mathematician who lived about 1200 A.D..
The Fibonacci numbers are generally defined this way: given the two latest Fibonacci numbers, create the next (and newest) Fibonacci by adding them together. So 1+1=2, 1+2=3, 2+3=5, etc. We explain this all mathematically via the recursive algorithm that follows:
F(1)=1
F(2)=1
F(n)=F(n-1)+F(n-2), ${\displaystyle n\geq 3}$
These numbers appear all over in nature: in pinecones, pineapples, artichokes, pussywillows, daisys, etc.
But they started out as pairs of immortal bunnies, which mature in a month, and then produce a single new pair every month thereafter (that's the way Fibonacci began studying them, it seems). We start out on the left, in the diagram which follows, and watch a single baby bunny pair mature, and then begin to populate the world with bunny pairs:
This diagram illustrates why the recursive algorithm works:
• we assume that the number of bunny pairs arising from a single immature pair is given by the sequence ${\displaystyle \left.F(n)\right.}$, with generations 1, 2, 3, 4, 5 shown here (although this will continue on forever, of course).
• We notice that at the third generation, the total population of pairs is given by the sum of two new populations: a perfect copy of the original tree, starting from the first generation, and a perfect copy of the original tree starting from the second generation.
• Hence ${\displaystyle \left.F(3)=F(2)+F(1)\right.}$, ${\displaystyle \left.F(4)=F(3)+F(2)\right.}$, and so on. So, more generally,
${\displaystyle \left.F(n)=F(n-1)+F(n-2)\right.}$
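As a small illustration (a Python sketch added here, not part of the original notes), the recursive definition can be evaluated iteratively:

def fibonacci(n):
    # F(1) = F(2) = 1, F(n) = F(n-1) + F(n-2)
    a, b = 1, 1
    for _ in range(n - 2):
        a, b = b, a + b
    return a if n == 1 else b

print([fibonacci(n) for n in range(1, 12)])
# [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]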
## Fibonacci number decomposition
We "decompose" natural numbers using Fibonacci numbers and sums:
every natural number is either
1. Fibonacci, or
2. can be written as a sum of non-consecutive Fibonacci's in a unique way.
Examples?
Can we justify this?
• If a number is Fibonacci, then we're done.
• Assume a number is not Fibonacci:
• then there is a largest Fibonacci that "fits inside it" -- e.g. 24 = 21 + 3.
• If the remainder (3, in the example above), after subtraction, is Fibonacci, how do we know that it is not consecutive (that is, that it is not 13)? That is, how do we know that the two Fibonacci numbers are not successive Fibonacci numbers?
Because if they were, they could be combined to form a larger Fibonacci number! And we chose the largest Fibonacci possible to start.
• If it is not Fibonacci, can we not simply repeat the current process of looking for a sum for a non-Fibonacci number, but using the remainder instead of the original number? (i.e. can we not simply "recurse" -- that is, do it again, and so construct a chain of numbers leading down towards 1). Example: 33=21+12=21+8+4=21+8+3+1. A short code sketch of this greedy procedure follows below.
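The greedy procedure just described is easy to express as a short Python sketch (an illustration added here, not part of the original notes):

def fibonacci_decomposition(n):
    # Build the Fibonacci numbers up to n (starting from 1, 2 avoids a duplicate 1)
    fibs = [1, 2]
    while fibs[-1] + fibs[-2] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    # Repeatedly subtract the largest Fibonacci number that still fits
    parts = []
    for f in reversed(fibs):
        if f <= n:
            parts.append(f)
            n -= f
    return parts

print(fibonacci_decomposition(33))   # [21, 8, 3, 1]
print(fibonacci_decomposition(24))   # [21, 3]

Because the largest possible Fibonacci number is removed at each step, the remainder is always smaller than the next Fibonacci number down, so no two chosen numbers can be consecutive, which matches the argument above.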
## Fibonacci Nim
• Rules:
• Two players, n sticks (coins/bottlecaps/etc)
• First player must take anywhere from 1 to (n-1) sticks
• From then on, the next player may take from 1 up to twice what the previous player took.
• Winner is the player who picks up the last stick
• Is there a winning strategy?
• Is there a perfect defensive strategy?
• Let's look at the early examples: If you looked at some special cases, what can we conclude?
| Number of sticks n | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Winner | X | P2 | P2 | P1 | P2 | P1 | P1 | P2 |
It looks like going first on a non-Fibonacci number puts you in the driver's seat, whereas being second on a Fibonacci number is the place to be.
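To check this pattern beyond the small cases above, here is a brute-force Python sketch (an illustration added here, not part of the original notes) that searches the game tree:

from functools import lru_cache

@lru_cache(maxsize=None)
def to_move_wins(remaining, max_take):
    # True if the player to move can force a win with `remaining` sticks left
    # and at most `max_take` sticks allowed on this turn.
    for take in range(1, min(remaining, max_take) + 1):
        if take == remaining:
            return True                      # taking the last stick wins
        if not to_move_wins(remaining - take, 2 * take):
            return True                      # leave the opponent a losing position
    return False

# Player 1 may take 1..(n-1) sticks on the first move.
for n in range(2, 15):
    p1_wins = any(not to_move_wins(n - t, 2 * t) for t in range(1, n))
    print(n, "P1" if p1_wins else "P2")

The output reproduces the table above and shows P2 winning exactly at the Fibonacci numbers in this range (2, 3, 5, 8, 13).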
Defending Macros
by Steve Donovan, the Author of C++ by Example: "UnderC" Learning Edition
MAR 15, 2002
The C++ preprocessor comes from the C legacy, and many supporters of the language would like to see it go away. The preprocessing step works on the source text: it compensates for some of the deficiencies of C by replacing symbolic constants with numbers, and so on. Because these deficiencies have mostly been addressed, the separate preprocessing step seems very old-fashioned and inelegant. But the inclusion of files, conditional compilation, and so on depend on the preprocessor so strongly that no proposal to retire it has been successful.
I think most people are in agreement that inline functions and constants are much better than macros. The basic problem is that the preprocessor actually does text substitution on what the compiler is going to see. So macro-defined constants appear to the compiler as plain numbers. If I define PI in the old-fashioned way, as follows, then subsequently the debugger will have no knowledge of this symbolic constant, because it literally did not see it:
#define PI 3.1412
Simple-minded text substitution can easily cause havoc and is not saved by putting in parentheses:
#define SQR1(x) x*x
#define SQR2(x) (x)*(x)
...
SQR2(1+y) => (1+y)*(1+y) ok
SQR2(sin(x)) => (sin(x))*(sin(x)) ok - eval. twice
SQR2(i++) => (i++)*(i++) bad - eval. twice!
This vulnerability to side effects is the most serious problem affecting all macros. (Yes, I know the traditional hack in this case, but it has a serious weakness.) Inline functions do a much better job, and are just as fast. Again, once macros get any larger than SQR, then you are lost if you are trying to browse for a macro symbol or trace through a macro call. Debuggability and browsability are often unappreciated qualities when evaluating coding idioms; in this case, they agree that macros stink.
Another serious problem with macros is related to their lack of browsability; they are completely outside the C++ scoping system. You never know when a macro is going to clobber your program text in some strange way. (Potential namespace pollution problems are nothing compared to this one!) So, a naming convention is essential; any macros must be in uppercase and have at least one argument, so they don't conflict with any symbolic constants. The only type-checking you have with macros will be for the number of arguments, so it's wise to use this. Of course, macros also have an important role to play in conditional compilation, so such symbols must also be distinctly named (initial underscores are useful).
I have rehashed all these old issues so you can appreciate the strict limits we must place on any useful macro. The first one I'll introduce is a modified version of the FOR macro:
#define FOR(i,n) for(int i = 0, _ct = (n); i < _ct; i++)
This is free of the side-effect problem because a temporary variable is used to contain the loop count. (I am assuming that the compiler has the proper scoping for variables declared like this! Both i and _ct must be private to the loop. The Microsoft compiler will finally be compliant on this irritating item this year.) If n is a constant, then a good compiler will eliminate the local variable, so no penalty for correctness is necessary here.
My argument is that using FOR consistently leads to fewer errors and improved code readability. One problem with typing the for-statement is that the loop variable is repeated three times, so mistakes happen. My favorite is typing an i instead of a j in a nested loop:
for(int i = 0; i < n; i++)
for(int j = 0; i < m; j++)
...
Note here that slight differences are invisible in the lexical noise. If I see FOR(k,m), then I know this is a normal k = 0..m-1 loop, whereas if I see a for-statement I know it deviates from this pattern (like for(k = 0; k <= m; k++)). So, exceptional cases are made more visible. I find it entertaining that these statement macros are actually safer in standard C++ because you can define local loop variables.
In my article called "Overdoing Templates," I point out that C++ is not good at internal iterators. Here is a typical situation:
bool has_expired(Shape *ps)
{ return ps->modified_time() < expiry_time; }
...
int cnt =
std::count_if(lsl.begin(),lsl.end(),has_expired);
Compare this with
list<Shape *>::iterator sli;
int cnt = 0;
for(sli = lsl.begin(); sli != lsl.end(); ++sli)
if ((*sli)->modified_time() < expiry_time) ++cnt;
This is much better behaved; the condition within the loop is explicit, and the scope of expiry_time (which is effectively global in the first version) can be local. The explicit declaration of an iterator does make this more verbose, however, and it's good practice to use typedefs (such as 'ShapeList') here.
I want to introduce a few candidate statement macros to make this common pattern even easier on the eye. First, let me introduce GCC's typeof operator, which makes a lot of template trickery quite straightforward. It takes an expression (which, as with sizeof, is not evaluated), and deduces its type, and can be used in declarations wherever a type is required:
double x1 = 2.3;
typeof(x) x2 = x1;
typeof(&x) px = &x1;
I'm presenting typeof as a part of C++ because Bjarne Stroustrup would like to see it included in the next revision of the standard, and it's a cool feature that needs every vote it can get. With it, I can write the FORALL statement macro, and express our example more simply:
#define FORALL(it,c) \
for(typeof((c).begin()) it = (c).begin(); \
it != (c).end(); ++it)
...
int cnt = 0;
FORALL(sli,ls)
if ((*sli)->modified_time() < expiry_time) ++cnt;
This statement macro works with any container-like object; that is, any type that defines an iterator and begin()/end(). But it suffers from the side-effect problem. It should not be called when the container argument is some non-trivial expression, such as a function call, because that expression must be evaluated for each iteration of the loop. And there is no way to enforce that restriction, because macros are too dumb. So FORALL does not meet our criterion as a "safe" statement macro. Besides, it simply cannot be expressed in the standard language.
A better candidate is FOR_EACH, which is a construct that is found in many languages. For instance, AWK has 'for(i in array)' for iterating over all keys in an associative array, and Visual Basic (and now C#) has FOR EACH. I will show that FOR_EACH is a much better-behaved statement macro, and it can in fact be implemented using the standard language, although not so efficiently. This is what I want to be able to say:
int cnt = 0;
Shape *ps;
FOR_EACH(ps,ls)
if (ps->modified_time() < expiry_time) ++cnt;
Note that this form makes it hard to accidentally modify the list, and as a bonus, is rather more debuggable. I bring this up because debugging code using the standard containers can be frustrating. If sli is an iterator, then *sli is the value—but the built-in expression evaluators in gdb and Visual Studio can't understand these smart pointers.
The FOR_EACH construct needs a special kind of iterator that binds a variable reference to each object in turn. When we ask this iterator for the next element, it assigns the next value to the variable reference. Eventually, it signals to the caller that there are no more elements in the collection, and the loop can terminate. The implementation using typeof follows:
// foreach.h
template <class C, class T>
struct _ForEach {
typename C::iterator m_it,m_end;
T& m_var;
_ForEach(C& c, T& t) : m_var(t)
{ m_it = c.begin(); m_end = c.end(); }
bool get() {
bool res = m_it != m_end;
if (res) m_var = *m_it;
return res;
}
void next() { ++m_it; }
};
#define FOR_EACH(v,c) \
for(_ForEach<typeof(c),typeof(v)> _fe(c,v); \
_fe.get(); _fe.next())
The ForEach constructor requires two things: a reference and a container-like object. These are only evaluated once as arguments, so there are no side effects. So FOR_EACH is valid in a number of contexts. Please note that it is better for containers of pointers or small objects because copying of each element takes place in turn.
The typeof operator is essential here because template classes will not deduce their types from their constructor arguments. But you can actually implement FOR_EACH without typeof, using the fact that function templates can deduce their argument types. However, you cannot declare the concrete type, so it must be derived from an abstract base and then created dynamically. The listing can be found here.
int i;
string s = "hello";
// gives 104 101 108 108 111
FOR_EACH(i,s) cout << i << ' ';
list<string> lss;
...
FOR_EACH(s,lss) ... // may involve excessive copying!
How efficient is FOR_EACH? Tests with iterating through a list show that the typeof version is only about 20% slower than the explicit loop because the reference iterator code can be easily inlined. The standard version is nearly three times slower because of the virtual method calls. Even so, in a real application, its use would most likely have no discernible effect on the total run time.
A serious criticism of statement macros is that they allow people to invent their own private language that ends up being less readable and maintainable. However, in a large project, programmers will fashion an appropriate idiom for the job in hand; hopefully, they leave documentation about their choices. You certainly don't need the preprocessor to generate a private language. One or two new control constructs can be introduced on a per-case basis without affecting readability adversely. I'm not suggesting that programmers should be given carte blanche to make their C++ look like Basic or Algol 68, but a case can be made for using statement macros to improve code readability. This is particularly true for the more informal code that gets generated in interactive exploration and test frameworking.
Macros still remain outside of the language, and for this reason, I don't expect much support on this modest position. It is interesting to speculate about what extra features C++ would need to support these custom control structures. Here is what a statement template might look like:
template <class T, class S>
__statement FOR(T t, S e) for(int t = 0; t < e; t++)
It would still probably involve a lexical substitution, but done by the compiler, not the preprocessor. FOR is now a proper C++ symbol and can be properly scoped. Most importantly, potential side effects would automatically be eliminated because e will only be evaluated once. The macro FORALL can now safely be defined. Here is an example that cannot be done reliably using the preprocessor; an alternative implementation of Bjarne Stroustrup's idea of input sequences.
template <class C>
__statement iseq(C c) c.begin(), c.end()
....
copy(iseq(ls),array);
If lexical substitution were more closely integrated into the language, then the preprocessor could finally be retired after a long and curious career.
# Using BurpSuite with qutebrowser
Some time ago I switched to qutebrowser, a keyboard-driven browser based on QtWebEngine. Thus, I had to adapt my BurpSuite setup for WebApp pentesting.
When pentesting web applications, a MITM proxy to log HTTP(S) requests is a necessity. Although open-source alternatives exist, PortSwigger’s BurpSuite is the de-facto standard in this niche.
# Certificate Installation
To be able to MITM TLS-encrypted connections without certificate errors, you first need to install Burp’s locally generated CA certificate.
Like Chromium and Firefox, qutebrowser checks the user-local NSS Database at ~/.pki/nssdb/ for certificates. Using certutil, you can install the certificate like this:
$ certutil -d "sql:$HOME/.pki/nssdb" -A -i ~/Downloads/cacert.der -n "Burp Suite CA" -t C,,
# Proxy Setup in Qutebrowser
Next thing you’ll need is a proxy setup for qutebrowser. A proxy can easily be set using:
:set content.proxy http://127.0.0.1:8080/
In order to enable and disable “burp-mode” faster, you can use aliases:
:set aliases '{ "burp": "set content.proxy http://127.0.0.1:8080/", "noburp": "set content.proxy system" }'
Now you can simply type
:burp
to start sending the requests via the proxy.
When you type
:noburp
the browser will use the system proxy again.
# EFA Departure Monitor on the command line
I just hacked together a small shell script that gets departures from my local public transportation service. It will list upcoming departures at a stop. You can find the script here.
## Getting Started
You can either use the stop name:
$./efa-dm.sh "Dortmund Hbf" Or you can use the stop ID: $ ./efa-dm.sh 20000131
By default, it will simply print upcoming departures as a tab-separated list.
$ ./efa-dm.sh -n 3 "Essen Rüttenscheider Stern"
1    2018 3 5 16 20    2018 3 5 16 21    101    Essen Helenenstr.
2    2018 3 5 16 21    2018 3 5 16 22    108    Essen Altenessen Bf Schleife
2    2018 3 5 16 22    U11    Essen Messe W.-Süd/Gruga

| Column | Meaning |
| --- | --- |
| 1 | minutes left until departure |
| 2 | scheduled departure (year) |
| 3 | scheduled departure (month) |
| 4 | scheduled departure (day) |
| 5 | scheduled departure (hour) |
| 6 | scheduled departure (minute) |
| 7 | predicted actual departure (year) |
| 8 | predicted actual departure (month) |
| 9 | predicted actual departure (day) |
| 10 | predicted actual departure (hour) |
| 11 | predicted actual departure (minute) |
| 12 | line name |
| 13 | direction |

You can also use the -p flag to get the pretty-printed version:

$ ./efa-dm.sh -p -n 3 "Essen Rüttenscheider Stern"
16:20(+1) 101 Essen Helenenstr. in 1 min
16:21 108 Essen Altenessen Bf Schleife in 1 min
16:22 U11 Essen Messe W.-Süd/Gruga in 2 min
## Usage
Here’s a list of all command line options:
$ ./efa-dm.sh -h
Usage: ./efa-dm.sh [-p] [-d] [-a <API_ENDPOINT>] [-n <NUM_DEPARTS>] [-t <TIME_OFFSET>] <STOP_NAME>

Options:
  -h                 Show this help
  -p                 Pretty-printed output (instead of tab-separated values)
  -d                 Debug mode (output server reply and exit)
  -a <API_ENDPOINT>  Use API endpoint at this URL
  -n <NUM_DEPARTS>   Limit the number of departures (default: 8)
  -t <TIME_OFFSET>   Skip departures in next X minutes (default: 0)

## Services that use EFA

Here's a list of other public transportation services that also use the Elektronische Fahrplanauskunft (EFA) system and thus can also be queried by the script:

• Verkehrsverbund Rhein-Ruhr (VRR), Germany: http://efa.vrr.de/standard/XSLT_DM_REQUEST
• Verkehrs- und Tarifverbund Stuttgart (VVS), Germany: http://www2.vvs.de/vvs/XSLT_DM_REQUEST
• Münchner Verkehrs- und Tarifverbund (MVV), Germany: http://efa.mvv-muenchen.de/mobile/XSLT_DM_REQUEST
• Nahverkehrsgesellschaft Baden-Württemberg (NVBW), Germany: http://www.efa-bw.de/nvbw/XSLT_DM_REQUEST
• Regional Transportation Authority (RTA) Chicago, USA: http://tripplanner.rtachicago.com/ccg3/XSLT_DM_REQUEST

## Departure Monitor in Polybar

You can easily use this script as a polybar module:

[module/efa1]
type = custom/script
exec = /path/to/efa-dm.sh -p -t 4 -n 1 "Essen Hbf"
format = <label>
; In case this is a bus stop:
; format = <label>
; For a subway station:
; format = <label>
interval = 60

# Fixing WiFi Multicast Flooding in bridged networks

I'm using MPD and PulseAudio's RTP multicasting to get a seamless multi-room audio experience. Unfortunately, if you're using a network bridge to connect your wired and wireless LAN, using multicast RTP might have unintended consequences: all WiFi clients are flooded with multicast traffic, which can bring down the entire wireless network.

When multicast transmission arrives at the receiver's LAN, it is flooded to every Ethernet switch port unless flooding reduction such as IGMP snooping is employed (Section 2.7). (RFC 5110, Section 2 "Multicast Routing", page 4)

If you don't want to set up IGMP snooping, you have two alternatives: You can either

1. un-bridge Ethernet and WiFi interfaces and switch to a routed approach, or
2. filter out multicast packets on their way from the wired interface to the wireless ones.

Since (1) has other implications that I'd rather avoid (e.g. blocking broadcast traffic, too, so that service autodiscovery won't work anymore), I chose the second approach. This can easily be achieved using ebtables, which allows link layer filtering on Linux bridge interfaces.

My router is running OpenWRT, which does not ship with ebtables by default, so it needs to be installed first:

# opkg update
# opkg install ebtables

This is how my bridge setup looks:

# brctl show
bridge name     bridge id               STP enabled     interfaces
br-lan          7fff.12345678abcd       no              eth0.1
                                                        wlan0
                                                        wlan1
br-wan          7fff.12345678abcd       no              eth0.2

eth0.1, wlan0 and wlan1 are bridged. It's a dual band router that has wifi interfaces for both the 2.4 GHz (wlan0) and the 5 GHz band (wlan1).

Now the filter rules need to be added. One rule for each wifi interface is necessary:

# ebtables -A FORWARD -o wlan0 -d Multicast -j DROP
# ebtables -A FORWARD -o wlan1 -d Multicast -j DROP

These rules tell ebtables to drop all multicast packets if their output device is either wlan0 or wlan1. The effect is immediately noticeable: before setting up multicast filtering the wifi interfaces were quite busy; afterwards, there's a lot less going on.

To make the filtering permanent, simply add the ebtables commands to /etc/firewall.user.
# Upgrading iLO 4 on a HPE ProLiant MicroServer from Linux

I recently got my hands on a ProLiant MicroServer Gen8 by Hewlett Packard Enterprise (HPE). As I always do when setting up a server, I checked if the device needs a firmware upgrade. And indeed it did: its version of Integrated Lights-Out (iLO) 4, its built-in server provisioning and management software, is affected by CVE-2017-12542, which is a solid 10.0 on the CVSS 2.0 score chart. So I decided to update it.

Fortunately, the iLO web interface has a page where firmware upgrades can be uploaded. Since it's in an isolated network, using the web interface should not pose a security problem. On the other hand, locating the proper firmware file to upload was not as easy as it should be. It's Hewlett-Packard, after all. In case someone else is looking for the iLO 4 *.bin file, here's what I did:

1. Visit the iLO 4 support page, but do not select OS-Independent (it's not in there). Select "Red Hat Enterprise Linux 7" instead (direct link).
2. Open the "Firmware - LOM (Lights-Out Management)" section and download hp-firmware-ilo4-2.55-1.1.i386.rpm.
3. To extract the actual firmware file from the RPM, use this command:

$ rpm2cpio hp-firmware-ilo4-2.55-1.1.i386.rpm | bsdtar -x -s'|.*/||' -f - ./usr/lib/i386-linux-gnu/hp-firmware-ilo4-2.55-1.1/ilo4_255.bin
The resulting file (ilo4_255.bin) can then be uploaded to the web interface:
After the upgrade process finishes, you’ll be redirected to the brand new login screen:
# Generating syntax diagrams using the LaTeX rail package
If you ever had the need to add syntax specifications to your document, you basically have two options: Either write down the syntax in the Backus-Naur form (BNF) (or one of its derivatives) or opt for a more graphical approach by adding “railroad diagrams”. In my opinon, the latter are easier to grasp for less experienced readers and also look quite nice.
In LaTeX, you can use the rail package to generate those diagrams from EBNF rules:
\begin{rail}
decl : 'def' identifier '=' ( expression + ';' )
| 'type' identifier '=' type
;
\end{rail}
This will result in something like this:
To achieve this, the package first generates a *.rai file. We then have to convert the *.rai file to a *.rao file by invoking the accompanying C program named rail.
However, the rail package is fairly old. It has been written by Luc Rooijakkers in 1991 (!) and was updated by Klaus Barthelmann until 1998. Thus, the code is – at least – 19 years old and that really shows: Trying to compile it on modern systems yields a bunch of compilation errors.
Most of the issues stem from missing return types in function declarations and also missing forward declarations. I stepped up and fixed these issues, so that it works with an up-to-date compiler (I tested with gcc (GCC) 6.3.1 on Arch Linux). You can find the result on GitHub.
I also threw in some Makefile improvements into the mix: You can now use DESTDIR and PREFIX (defaults to /usr/local) when running make install.
## Installation
Installation should be fairly straightforward. Here's an example which will install rail into /usr:
$ curl -L https://github.com/Holzhaus/latex-rail/archive/v1.2.1.tar.gz | tar xzvf -
$ cd latex-rail-1.2.1
$ make
bison -y -dv gram.y
gram.y: warning: 2 reduce/reduce conflicts [-Wconflicts-rr]
cmp -s gram.c y.tab.c || cp y.tab.c gram.c
cmp -s gram.h y.tab.h || cp y.tab.h gram.h
gcc -DYYDEBUG -O -c -o rail.o rail.c
gcc -DYYDEBUG -O -c -o gram.o gram.c
flex -t lex.l > lex.c
gcc -DYYDEBUG -O -c -o lex.o lex.c
gcc -DYYDEBUG -O rail.o gram.o lex.o -o rail
$ sudo make PREFIX=/usr install
$ sudo mktexlsr

Please note that installing stuff using sudo make install will circumvent your package manager and is usually not a good idea. If you're using Arch Linux you should use the AUR package instead:

$ pacaur -S latex-rail
## Manual compilation and Latexmk support
To generate a document manually, you need to run multiple commands:
1. Run latex mydoc, which will create mydoc.rai
2. Run rail mydoc to generate mydoc.rao from mydoc.rai
3. Run latex mydoc for the final document
If you don’t want to bother with running LaTeX multiple times, you can use latexmk, a perl script to automate the document generation.
To make it work with the rail package, you should create a .latexmkrc in your document folder with this content:
push @file_not_found, '^Package .* Info: No file (.+) on input line \d+\.';
add_cus_dep('rai', 'rao', 0, 'rail');
sub rail {
my ($base_name,$path, $ext) = fileparse($_[0], qr/\.[^.\/]*/ );
pushd $path; my $return = system "rail $base_name"; popd; return $return;
}
The first line will add the appropriate RegEx to Latexmk’s missing file detection, the second line will instruct latexmk to run the rail subroutine with a *.rai file as input and *.rao file as output.
## Alternatives
I you don’t quite like the rail package, you might want to look into one of these alternative packages:
These also an online tool to generate railroad diagrams if you don’t want to do it in LaTeX. |
# ExternalEvaluate[pySys, pyCmd] evaluates to Failure[…]
ExternalEvaluate[pySys, pyCmd] evaluates to Failure[...] every time pyCmd contains some import statement (otherwise it just works).
The result says it is a Python-related error:
TypeError required field "type_ignores" missing from Module
The "problematic" files seem to be:
C:\Program Files\Wolfram Research\Mathematica\12.0\SystemFiles\Links\WolframClientForPython\wolframclient\language\decorators.py
C:\Program Files\Wolfram Research\Mathematica\12.0\SystemFiles\Links\WolframClientForPython\wolframclient\utils\externalevaluate.py
Both Mathematica 12.0.0 and Python 3.8.1 are fresh installations.
EDIT It is indeed a problem with Python 3.8.
Also, in:
C:\Program Files\Wolfram Research\Mathematica\12.0\SystemFiles\Links\WolframClientForPython\setup.py
it is clearly visible that 3.8 is not """officially""" supported yet.
However, see the accepted answer for a clever workaround.
TL;DR The problem is not present as long as you stick to Python <= 3.7.x, but Michael Himbeault did an awesome job making 3.8.x work as well.
## 1 Answer
Reading this, it turns out the patch is a one-line fix to enable Python 3.8 support. I haven't tested it extensively, but I've included it in my GitHub gist that also lets you use Python runtimes installed from the Microsoft Store.
Patch reproduced here directly (applied to 12.0.0) for C:\Program Files\Wolfram Research\Mathematica\12.0\SystemFiles\Links\WolframClientForPython\wolframclient\utils\externalevaluate.py:
66c66
< exec(compile(ast.Module(expressions, []), '', 'exec'), current)
---
> exec(compile(ast.Module(expressions), '', 'exec'), current)
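For background, the change behind this error is in CPython itself: since Python 3.8, an ast.Module node built by hand must carry a type_ignores field before it can be compiled. A minimal standalone sketch of that behaviour (using only the standard ast module, independent of Wolfram's code):

import ast

tree = ast.parse("x = 40 + 2")   # ast.parse fills in both body and type_ignores

# Rebuilding the module by hand: on Python 3.8+, omitting type_ignores leads to
# 'TypeError: required field "type_ignores" missing from Module' at compile time,
# while passing an explicit empty list avoids the error.
module = ast.Module(body=tree.body, type_ignores=[])

namespace = {}
exec(compile(module, "<demo>", "exec"), namespace)
print(namespace["x"])   # 42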
• Hi, thank you! As soon as I get to my laptop I'll try your fix ;) if it works on my machine (of course it should, but for the sake of full transparency I'll wait until I actually try) I'll accept your answer! – Spaghetti Jan 16 at 7:52
• For the same situation of Mathematica 12.0.0 and Python 3.8.1 in MacOS, this solution simply solved the problem. Great. – Joo-Haeng Lee May 17 at 2:01 |
• The ALICE Collaboration has measured inclusive J/psi production in pp collisions at a center of mass energy sqrt(s)=2.76 TeV at the LHC. The results presented in this Letter refer to the rapidity ranges |y|<0.9 and 2.5<y<4 and have been obtained by measuring the electron and muon pair decay channels, respectively. The integrated luminosities for the two channels are L^e_int=1.1 nb^-1 and L^mu_int=19.9 nb^-1, and the corresponding signal statistics are N_J/psi^e+e-=59 +/- 14 and N_J/psi^mu+mu-=1364 +/- 53. We present dsigma_J/psi/dy for the two rapidity regions under study and, for the forward-y range, d^2sigma_J/psi/dydp_t in the transverse momentum domain 0<p_t<8 GeV/c. The results are compared with previously published results at sqrt(s)=7 TeV and with theoretical calculations.
• ### System-size and centrality dependence of charged kaon and pion production in nucleus-nucleus collisions at 40A GeV and158A GeV beam energy(1207.0348)
July 2, 2012 nucl-ex
Measurements of charged pion and kaon production are presented in centrality selected Pb+Pb collisions at 40A GeV and 158A GeV beam energy as well as in semi-central C+C and Si+Si interactions at 40A GeV. Transverse mass spectra, rapidity spectra and total yields are determined as a function of centrality. The system-size and centrality dependence of relative strangeness production in nucleus-nucleus collisions at 40A GeV and 158A GeV beam energy are derived from the data presented here and published data for C+C and Si+Si collisions at 158A GeV beam energy. At both energies a steep increase with centrality is observed for small systems followed by a weak rise or even saturation for higher centralities. This behavior is compared to calculations using transport models (UrQMD and HSD), a percolation model and the core-corona approach.
• ### Energy dependence of phi meson production in central Pb+Pb collisions at sqrt(s_nn) = 6 to 17 GeV(0806.1937)
Oct. 27, 2008 nucl-ex
Phi meson production is studied by the NA49 Collaboration in central Pb+Pb collisions at 20A, 30A, 40A, 80A and 158A GeV beam energy. The data are compared with measurements at lower and higher energies and to microscopic and thermal models. The energy dependence of yields and spectral distributions is compatible with the assumption that partonic degrees of freedom set in at low SPS energies.
• ### Event-by-event transverse momentum fluctuations in nuclear collisions at CERN SPS(0707.4608)
Feb. 28, 2008 nucl-ex
The latest NA49 results on event-by-event transverse momentum fluctuations are presented for central Pb+Pb interactions over the whole SPS energy range (20A - 158A GeV). Two different methods are applied: evaluating the $\Phi_{p_{T}}$ fluctuation measure and studying two-particle transverse momentum correlations. The obtained results are compared to predictions of the UrQMD model. The results on the energy dependence are compared to the NA49 data on the system size dependence. The NA61 (SHINE, NA49-future) strategy of searching of the QCD critical end-point is also discussed. |
Lachlan's question via email about the Bisection Method
Prove It
Well-known member
MHB Math Helper
Consider the equation $\displaystyle 8\cos{\left( x \right) } = \mathrm{e}^{-x/7}$.
Perform four iterations of the Bisection Method to find an approximate solution in the interval $\displaystyle x \in \left[ 1.35, 1.6 \right]$.
The Bisection Method is used to solve equations of the form $\displaystyle f\left( x \right) = 0$, so we need to rewrite the equation as $\displaystyle 8\cos{ \left( x \right) } - \mathrm{e}^{-x/7} = 0$. Thus $\displaystyle f\left( x \right) = 8\cos{ \left( x \right) } - \mathrm{e}^{-x/7}$.
I have used my CAS to solve this problem. Note that the calculator must be in Radian mode.
So our solution is $\displaystyle x \approx c_4 = 1.45938$. |
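For readers without a CAS to hand, the same four iterations can be reproduced with a short Python sketch (an illustration added here, not part of the original answer):

import math

def f(x):
    return 8 * math.cos(x) - math.exp(-x / 7)

a, b = 1.35, 1.6
for i in range(1, 5):
    c = (a + b) / 2
    print(f"c{i} = {c:.5f}   f(c{i}) = {f(c):+.5f}")
    if f(a) * f(c) < 0:
        b = c    # the sign change (and hence the root) lies in [a, c]
    else:
        a = c    # otherwise it lies in [c, b]

The midpoints come out as c1 = 1.47500, c2 = 1.41250, c3 = 1.44375 and c4 = 1.45938, in agreement with the value quoted above.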
# What is the cofficient of Cr^2+?
Apr 11, 2018
$2 C {r}^{2 +}$
#### Explanation:
Start by finding what has been oxidized and what has been reduced by inspecting the oxidation numbers:
In this case:
$C {r}^{2 +} \left(a q\right) \to C {r}^{3 +} \left(a q\right)$
Is the oxidation
and
$S {O}_{4}^{2 -} \left(a q\right) \to {H}_{2} S {O}_{3} \left(a q\right)$
is the reduction
Start by balancing the half equations for oxygen by adding water:
$S {O}_{4}^{2 -} \left(a q\right) \to {H}_{2} S {O}_{3} \left(a q\right) + {H}_{2} O \left(l\right)$
(Only the reduction includes oxygen)
Now balance hydrogen by adding protons:
$4 {H}^{+} \left(a q\right) + S {O}_{4}^{2 -} \left(a q\right) \to {H}_{2} S {O}_{3} \left(a q\right) + {H}_{2} O \left(l\right)$
(again, only the reduction involves hydrogen)
Now balance each half-equation for charge by adding electrons to the more positive side:
$C {r}^{2 +} \to C {r}^{3 +} + {e}^{-}$
$4 {H}^{+} + S {O}_{4}^{2 -} \left(a q\right) + 2 {e}^{-} \to {H}_{2} S {O}_{3} \left(a q\right) + {H}_{2} O \left(l\right)$
And to equalize the electrons, multiply the whole half equation with the least electrons by an integer to equal the other half-equation in the number of electrons, thereby balancing the electrons on both sides of the equation:
$2 C {r}^{2 +} \to 2 C {r}^{3 +} + 2 {e}^{-}$
Now combine everything and remove the electrons (as they in equal amounts on both sides they can be canceled in this step - otherwise just simplify as far as possible)
$4 {H}^{+} \left(a q\right) + S {O}_{4}^{2 -} \left(a q\right) + 2 C {r}^{2 +} \left(a q\right) \to 2 C {r}^{3 +} \left(a q\right) + {H}_{2} S {O}_{3} \left(a q\right) + {H}_{2} O \left(l\right)$
Now the equation is balanced and we can see that the coefficient to $C {r}^{2 +}$ is 2. |
# Tag Info
1
Here's a proof why $l^p(\mathbb N)$ is not locally convex for $0<p<1$; the choice of $\mathbb N$ is just for simplicity, it can be easily generalized. If it were locally convex, then the unit ball $B_1(0)$ would contain a convex neighborhood U of $0$. Then there must be $\delta>0$ with $B_{2\delta}(0)\subset U$, hence also $\mathrm{conv}(B_{2\delta}(0))\subset U\subset B_1(0)$. Let ...
3
The closure of the domain in $L^2$ is simply $L^2$: Obviously it holds $C_0^\infty(0,1)\subset D(A_0)$. The set of smooth function is dense in $L^2(0,1)$, hence its closure is $L^2(0,1)$. This implies that the closure of $D(A_0)$ is $L^2(0,1)$ as well.
3
Yes, every continuous semi-norm $p$ on $M$ can be extended to a continuous semi-norm on $X$: Let $U=\lbrace x\in M: p(x)<1\rbrace$ be the unit ball of $p$. Since $U$ is open in $M$ and $0\in U$ there is a convex $0$-neighbourhood $V$ in $X$ such that $V\cap M\subseteq U$. Now, let $W$ be the convex hull of $U\cup V$. Since $U$ and $V$ are convex we have ...
# Temperature Control of an Alkaline Electrolyser
The chosen hybrid system is a model approximation of the temperature control used in ITBA's alkaline electrolyser. Electrolysis is an exothermic process which heats the KOH solution. In order to analyze the efficiency of the process as a function of temperature, a set of test temperatures is considered. From these results, relationships between consumed power, gas production and gas purity will be drawn.
To maintain the system temperature $T_1$, the model considers natural convection $\dot{Q}_{NC}$ between the whole equipment and the environment at temperature $T_0$, and the power dissipated by a cooling system, $\dot{Q}_{Cool}$, composed of a countercurrent flow of water and a radiator which also interacts with the environment. For this approximation the intermediate cooling-water step is neglected: $\dot{Q}_{Cool} = K_{Cool}\,(T_0 - T_1)$.
This system can be considered hybrid because of the switching of the cooling system: the discrete state q indicates whether $\dot{Q}_{Cool}$ is active.
Then, the proposed model is:
$\mathcal{H}=(C,f,D,g) \\ \\ x=\left[ \begin{array}{c} q \\ T \end{array} \right] \\ \\ C=\left\{x\in\mathbb{R}^2/(T < T_{MAX}\; \& \;q=0)\;|\; (T > T_{min}\; \& \;q=1) \right \} \\ \\ D=\left\{x\in\mathbb{R}^2/(T\geqslant T_{MAX}\; \& \;q=0)\;|\; (T\leq T_{min}\; \& \;q=1) \right \} \\ \\ f(x)=\begin{bmatrix} 0 \\ \frac{\dot{Q}+(K_{NC}+q\,K_{Cool})\,(T_0-T)}{C} \end{bmatrix} \\ \\ \\ g(x)=\begin{bmatrix} 1-q \\ T \end{bmatrix}$
Used parameters are:
$T_{Min} = 40 \;^{o}\textrm{C} \\ T_{Max} = 45 \;^{o}\textrm{C} \\ T_{Env} = 20 \;^{o}\textrm{C} \\ \dot{Q} = 1000 \;kW \\ K_{NC} = 5 \;kW/K \\ K_{Cool} = 50 \;kW/K$
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Function run
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function run
Parameters
% initial conditions
q_0 = 0;
Temp_0 = 20;
x0 = [q_0;Temp_0];
% simulation horizon
TSPAN=[0 1000];
JSPAN = [0 20];
% rule for jumps
% rule = 1 -> priority for jumps
% rule = 2 -> priority for flows
rule = 1;
options = odeset('RelTol',1e-6,'MaxStep',.1);
maxStepCoefficient = .1; % set the maximum step length. At each run of the
% integrator the option 'MaxStep' is set to
% (time length of last integration)*maxStepCoefficient.
% Default value = 0.1
% simulate
[t x j] = HyEQsolver( @f,@g,@C,@D,x0,TSPAN,JSPAN,rule,options,maxStepCoefficient);
% plot solution
figure(1) % position
clf
subplot(2,1,1),plotflows(t,j,x(:,1))
grid on
ylabel('q')
subplot(2,1,2),plotjumps(t,j,x(:,1))
grid on
ylabel('q')
figure(2) % velocity
clf
subplot(2,1,1),plotflows(t,j,x(:,2))
grid on
ylabel('Temp')
subplot(2,1,2),plotjumps(t,j,x(:,2))
grid on
ylabel('Temp')
% plot hybrid arc
plotHybridArc(t,j,x(:,2))
xlabel('j')
ylabel('t')
zlabel('Temp')
% figure(3)
% plot(t,y(:,1));
Parameters (Parameters.m)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Definition of Parameters for Model of Cooling Hysteresis
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
global TempMin TempMax Q Knatconv Kcool TempEnv
TempMin = 50; % °C
TempMax = 55; % °C
Q = 1000; % kW
Knatconv = 5; % kW/K
Kcool = 50; % kW/K
TempEnv = 20; % °C
Flow map (f.m)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Declaration of Flow Map F
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function xdot = f(x)
% Parameters
global Q % kW
global Knatconv % kW/K
global Kcool % kW/K
global TempEnv % °C
% State variables
q = x(1);
Temp = x(2);
% Differential equations
xdot = [0 ; (Q + (Knatconv + q * Kcool) * (TempEnv - Temp)) / 1000];
end
Flow set (C.m)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Declaration of Flow Set C
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function inC = C(x)
% Out:
% 0 if value is outside C
% 1 if value is inside C
global TempMin
global TempMax
q = x(1);
Temp = x(2);
if (((Temp < TempMax) && (q == 0)) || ((Temp > TempMin) && (q == 1)))
inC = 1;
else
inC = 0;
end
end
Jump map (g.m)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Declaration of Jump Map G
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function xplus = g(x)
% State variables
q = x(1);
Temp = x(2);
xplus = [1-q ; Temp];
end
Jump set (D.m)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Declaration of Jump Set D
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function inD = D(x)
% Out:
% 0 if value is outside D
% 1 if value is inside D
global TempMin
global TempMax
q = x(1);
Temp = x(2);
if (((Temp <= TempMin) && (q == 1)) || ((Temp >= TempMax) && (q == 0)))
inD = 1;
else
inD = 0;
end
end
Figures 1 and 2 represent the temperature $T_1$ and the hybrid arc $\phi$ as functions of continuous and discrete time, $t$ and $j$.
1. F=kx (Hooke's Law) Problem
Can someone guide me through this problem:
2. A mass of 500 g stretches a spring 8.0 cm when it is attached to it.
What additional weight would you have to add to it so that the spring is stretched 10 cm?
__________________________________________________ __________________________
I tried plugging in numbers into the rule and that didn't lead me anywhere!
2. $F = kx$
$mg$ is weight ... force due to gravity
$mg = kx$
$500g = k \cdot 8$
let $m$ = the additional mass (in grams) required ...
$(500+m)g = k\cdot 10$
divide the 2nd equation by the first ...
$\frac{(500+m)g}{500g} = \frac{k\cdot 10}{k \cdot 8}$
$\frac{500+m}{500} = \frac{10}{8}$
solve the last equation for $m$
3. Alternative Method
As stated, the equation for Hooke's Law can be rewritten as follows:
$F=ma=mg=-kx$
The gravitational constant $g=9.81\left(\frac{m}{s^{2}}\right)$ is substituted for acceleration $a$, $k$ represents the spring constant, and $x$ is the displacement (distance stretched in meters).
Rearrange in terms of $k$ and solve:
$-k=\frac{mg}{x}$
$-k=\frac{(0.50kg)(9.81\left(\frac{m}{s^{2}}\right)) }{0.08m}$
$k=-61.31\left(\frac{N}{m}\right)$
Now rewrite the equation in its standard form $F=-kx$ and evaluate using the new length.
The rest is pretty straight-forward, just remember that weight is a measure of gravitational force and mind your significant figures (2).
4. Hooke's Law
Hello s3a
Originally Posted by s3a
Can someone guide me through this problem:
2. A mass of 500 g stretches a spring 8.0 cm when it is attached to it.
What additional weight would you have to add to it so that the spring is stretched 10 cm?
__________________________________________________ __________________________
I tried plugging in numbers into the rule and that didn't lead me anywhere!
Forget formulae. Just use common sense. Hooke's Law says that:
• The tension in an elastic string is proportional to its extension.
In other words, if you double one, you double the other; if you increase one by 50%, you increase the other by 50%, and so on.
You want to increase the extension from 8 cm to 10 cm; in other words, by 25%. So increase the tension by 25% as well. Put on an extra mass of 25% of 500 gm. In other words, you'll need an extra 125 gm.
That was easy, wasn't it?
5. Well, you can solve it using common sense. But using formula, and understand enable you to solve any question.
F=kx
mg=kx
500g=8k (1) initial condition
let say the weight to strecth the spring 10cm is M
Mg=10k (2)
(2)/(1):
M/500=10/8
you will get M = 625g
so it will require additional 125g
6. Hooke's Law
Hello elliotyang
Of course, I'm not suggesting that you should always forget the formula. But you should always look for a simple approach first! When solving a quadratic, for example, always try to factorise before reaching for the formula.
7. But the answer is 1.2N.
8. Originally Posted by s3a
mass is not weight ... mg = W
125 g = .125 kg
(.125 kg)(9.8 m/s^2) = 1.225 N
9. Originally Posted by Knowledge
As stated, the equation for Hooke's Law can be rewritten as follows:
$F=ma=mg=-kx$
The gravitational constant $g=9.81\left(\frac{m}{s^{2}}\right)$ is substituted for acceleration $a$, $k$ represents the spring constant, and $x$ is the displacement (distance stretched in meters).
Rearrange in terms of $k$ and solve:
$-k=\frac{mg}{x}$
$-k=\frac{(0.50kg)(9.81\left(\frac{m}{s^{2}}\right)) }{0.08m}$
$k=-61.31\left(\frac{N}{m}\right)$
Now rewrite the equation in its standard form $F=-kx$ and evaluate using the new length.
The rest is pretty straight-forward, just remember that weight is a measure of gravitational force and mind your significant figures (2).
s3a,
If you had followed my methodology, then you would have arrived at the correct answer. The above process is most likely the preferred one for a high school physics student.
To conclude where I left off above:
$F=-kx$
$F=-(-61.31\left(\frac{N}{m}\right))(0.10m)$ Notice how the units cancel.
$F_(required)=6.131N$
$F_(original)=(0.50kg)(9.81\left(\frac{m}{s^{2}}\right))=4.905N$
$F_(additional)=F_(required)-F_(original)$
$F_(additional)=6.131N-4.905N$
$F_(additional)=1.2N$
10. Dear Knowledge,
I think your approach is kind of long winded.
For this case, F=-kx=mg
it is obviously seen that g and k are constant when compare the two situation since g is 9.81m/s^2 while k=spring constant. (since using the same spring, this vqlue should be the same for both)
So from this, we can deduce that m is directly proportional to x.
use this relation would be much faster.
instead you plug in the value into the formula compute, and then reverse it. of course it is not wrong, just kind of wasting time.
11. Originally Posted by elliotyang
Dear Knowledge,
I think your approach is kind of long winded.
For this case, F=-kx=mg
it is obviously seen that g and k are constant when compare the two situation since g is 9.81m/s^2 while k=spring constant. (since using the same spring, this vqlue should be the same for both)
So from this, we can deduce that m is directly proportional to x.
use this relation would be much faster.
instead you plug in the value into the formula compute, and then reverse it. of course it is not wrong, just kind of wasting time.
ElliotYang,
While my approach/explanation may be kind of "long winded" for people who have already had advanced math/physics classes, it is obvious the OP did not follow the rationale of using a proportion, nor did the OP understand the concept of Hooke's Law; i.e. the difference between mass and weight (hence the reason I included all of the units).
Furthermore, it is my opinion that this is not a forum to merely supply answers for a student's homework problems, but a place to further their understanding of the concept(s) and reinforce proper methods in preparation for higher level courses.
If the OP, or anyone for that matter, wants to be successful in advanced physics, it is a good idea to develop sound habits for problem solving from the beginning - because the concepts, equations, and methods only become more complex and involved.
12. it is a good idea to develop sound habits for problem solving from the beginning - because the concepts, equations, and methods only become more complex and involved.
I agree must understand the concept for problem solving. I just try to provide alternative, easier way to solve. Reduce it to the simplest form then only plug in the value. Too many value will sometimes lead to careless, besides need to take care about significant figures and decimal places all.
13. Easiest way using canadian teaching method
The easiest way to do it is to first get all your units into their simplest form; in Canada, on the end-of-year exam, they usually ask you to use kg-m-N
so
500g becomes .5kg -- F=mg F=.5(9.8) = 4.9N
8.0cm becomes .08m
so now using the rule F=kx you need
Force in N=force constant N/m * displacement in m
4.9N = k * .08m
k=4.9/.08
k=61.25
It is the same spring so same K value, we are now given a new displlacement
k=61.25 N/m
x=10cm = .1m
With this info we can find the Force
F=kx
F=(61.25)(.1)
F=6.125N
Now that we have the new force you can get the total mass
F=mg
6.125=m(9.8)
m=.625 kg
That is the total mass. They asked you how much was added on, so just take your total mass and subtract your initial mass: 0.625 kg - 0.500 kg = 0.125 kg.
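For anyone who wants to check the arithmetic in a few lines of code, here is a minimal Python sketch of the same calculation (using the thread's numbers: 0.50 kg, an initial stretch of 8 cm, a new stretch of 10 cm, and g = 9.8 m/s^2):

```python
g = 9.8            # gravitational acceleration, m/s^2
m_initial = 0.50   # mass already hanging on the spring, kg
x_initial = 0.08   # initial stretch, m
x_new = 0.10       # desired new stretch, m

k = m_initial * g / x_initial   # Hooke's law: k = F / x  ->  61.25 N/m
f_new = k * x_new               # force needed for the new stretch, N
m_total = f_new / g             # total hanging mass producing that force, kg
m_added = m_total - m_initial   # extra mass that has to be added, kg

print(round(k, 2), round(m_added, 3))   # 61.25 0.125
```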
# Chain rule
In calculus, the chain rule describes the derivative of a "function of a function": the composition of two functions, where the output z is a given function of an intermediate variable y which is in turn a given function of the input variable x.
Suppose that y is given as a function $\,y = g(x)$ and that z is given as a function $\,z = f(y)$. The rate at which z varies in terms of y is given by the derivative $\, f'(y)$, and the rate at which y varies in terms of x is given by the derivative $\, g'(x)$. So the rate at which z varies in terms of x is the product $\,f'(y)\cdot g'(x)$, and substituting $\,y = g(x)$ we have the chain rule
$(f \circ g)' = (f' \circ g) \cdot g' . \,$
In order to convert this to the traditional (Leibniz) notation, we notice
$z(y(x))\quad \Longleftrightarrow\quad z\circ y(x)$
and
$(z \circ y)' = (z' \circ y) \cdot y' \quad \Longleftrightarrow\quad \frac{\mathrm{d} z(y(x))}{\mathrm{d} x} = \frac{\mathrm{d} z(y)}{\mathrm{d} y} \, \frac{\mathrm{d} y(x)}{ \mathrm{d} x} . \,$
In mnemonic form the latter expression is
$\frac{\mathrm{d} z}{\mathrm{d} x} = \frac{\mathrm{d} z}{\mathrm{d} y} \, \frac{\mathrm{d} y}{ \mathrm{d} x} , \,$
which is easy to remember, because it looks as if the $\mathrm{d} y$ in the numerator and the denominator of the right hand side cancels.
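As a quick check of the rule, take $\,z = \sin(y)$ and $\,y = x^2$. Then
$\frac{\mathrm{d} z}{\mathrm{d} x} = \frac{\mathrm{d} z}{\mathrm{d} y} \, \frac{\mathrm{d} y}{ \mathrm{d} x} = \cos(y) \cdot 2 x = 2 x \cos(x^2) , \,$
which agrees with differentiating $\,z = \sin(x^2)$ directly.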
## Multivariable calculus
The extension of the chain rule to multivariable functions may be achieved by considering the derivative as a linear approximation to a differentiable function.
Now let $F : \mathbf{R}^n \rightarrow \mathbf{R}^m$ and $G : \mathbf{R}^m \rightarrow \mathbf{R}^p$ be functions with F having derivative DF at $a \in \mathbf{R}^n$ and G having derivative DG at $F(a) \in \mathbf{R}^m$. Thus DF is a linear map from $\mathbf{R}^n \rightarrow \mathbf{R}^m$ and DG is a linear map from $\mathbf{R}^m \rightarrow \mathbf{R}^p$. Then $G \circ F$ is differentiable at $a \in \mathbf{R}^n$ with derivative
$\mathrm{D}(G \circ F)(a) = \mathrm{D}G(F(a)) \circ \mathrm{D}F(a) . \,$
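For instance, take $n = p = 1$ and $m = 2$, with $F(x) = (x, x^2)$ and $G(u,v) = u v$, so that $G \circ F (x) = x^3$. The derivatives are the Jacobian matrices
$\mathrm{D}F(x) = \begin{pmatrix} 1 \\ 2x \end{pmatrix} , \qquad \mathrm{D}G(u,v) = \begin{pmatrix} v & u \end{pmatrix} ,$
and composing them at $(u,v) = F(x) = (x, x^2)$ gives
$\mathrm{D}G(F(x)) \circ \mathrm{D}F(x) = x^2 \cdot 1 + x \cdot 2x = 3 x^2 ,$
which is indeed the derivative of $x^3$.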
## AravindG 3 years ago hav some physics doubt anyone can help ???
1. AravindG
dashini!!!
2. DHASHNI
ya i do!!!! gd morn aravind
3. DHASHNI
where are you from ?????????????
4. AravindG
gd morns...wel i hav some quesions can u help me answer them??i am from kerala u??
5. DHASHNI
from TN
6. DHASHNI
so wats ur question!!!!!
7. AravindG
wow .can u help??there are some questions....
8. AravindG
dashiniiiii
9. DHASHNI
ya iam waiting 4 ur question!!!
10. AravindG
k. In the diagram below, blocks A, B and C weigh 3, 4 and 8 kg. The coefficient of friction between any two surfaces is .. A is held at rest by a massless rigid rod fixed to the wall, while B and C are connected by a light flexible cord passing around a fixed frictionless pulley. Find the force F necessary to drag C along the horizontal surface to the left at constant speed. Assume that the arrangement shown in the figure, B on C and A on B, is maintained all through.
11. DHASHNI
$\eta=???$
12. AravindG
[drawing omitted]
13. DHASHNI
can u pls show the diagram!!!!
14. AravindG
dashini
15. nilankshi
i am not able to understand the qus also;(
16. DHASHNI
because u r a babe!!!!!
17. nilankshi
nnnnnnnnnnnooooooooooooooo
18. DHASHNI
aravind!!!!! |
# Problem: Transportation Price
A student has to travel n kilometers. He can choose between three types of transportation:
• Taxi. Starting fee: 0.70 EUR. Day rate: 0.79 EUR/km. Night rate: 0.90 EUR/km.
• Bus. Day / Night rate: 0.09 EUR/km. Can be used for distances of minimum 20 km.
• Train. Day / Night rate: 0.06 EUR/km. Can be used for distances of minimum 100 km.
Write a program that reads the number of kilometers n and period of the day (day or night) and calculates the price for the cheapest transport.
## Input Data
Two lines are read from the console:
• The first line contains a number n – number of kilometers – an integer in the range of [1 … 5000].
• The second line contains the word “day” or “night” – traveling during the day or during the night.
## Output Data
Print on the console the lowest price for the given number of kilometers.
## Sample Input and Output
| Input | Output |
|-------|--------|
| 5<br>day | 4.65 |
| 7<br>night | 7 |
| 25<br>day | 2.25 |
| 180<br>night | 10.8 |
## Hints and Guidelines
We will read the input data and depending on the distance, we will choose the cheapest transport. To do that, we will write a few conditional statements.
### Processing the Input Data
In the task, we are given information about the input and output data. Therefore, in the first two lines from the solution, we will declare and initialize the two variables that are going to store the values of the input data. The first line contains an integer and that is why the declared variable will be of int type. The second line contains a word, therefore, the variable will be of string type.
Before starting with the conditional statements, we need to declare a variable that stores the value of the transport price.
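A minimal sketch of these first steps, written here in Python purely for illustration (the variable names are my own):

```python
# Read the input: number of kilometers and the period of the day.
n = int(input())           # an integer in the range [1 ... 5000]
period = input().lower()   # "day" or "night"

# Variable that will hold the price of the cheapest transport.
price = 0.0
```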
### Calculating Taxi Rate
After having declared and initialized the input data and the variable that stores the value of the price, we have to decide which conditions of the task have to be checked first.
The task specifies that the rates of two of the vehicles do not depend on whether it is day or night, but the rate of the third one (the taxi) does. This is why the first check will be whether it is day or night, so that it is clear which rate the taxi will use. To do that, we declare one more variable that stores the value of the taxi rate.
In order to calculate the taxi rate, we will use a conditional statement of the if-else type, through which the variable for the taxi rate will receive its value.
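For example, the day/night check could look like the following Python fragment (a sample `period` value is hard-coded so that it runs on its own; in the full program it comes from the input read earlier):

```python
period = "night"   # sample value; normally read from the input

# Pick the taxi rate depending on the period of the day.
if period == "day":
    taxi_rate = 0.79   # EUR/km during the day
else:
    taxi_rate = 0.90   # EUR/km during the night

print(taxi_rate)   # 0.9
```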
### Calculating Transportation Price
After having done that, now we can start calculating the transport price itself. The constraints in the task refer to the distance that the student wants to travel. This is why, we will use an if-else statement that will help us find the price of the transport, depending on the given kilometers.
First, we check whether the kilometers are fewer than 20, as the task specifies that for such short distances only the taxi can be used. If the condition is true (returns true), the variable created to store the transport price is assigned the corresponding value: the starting fee plus the taxi rate multiplied by the distance the student has to travel.
If the condition returns false, the next step is to check whether the kilometers are fewer than 100. We do that because the task specifies that in this range a bus can be used as well, and the bus's price per kilometer is cheaper than the taxi's. Therefore, if this condition is true, in the body of the else-if statement we store in the price variable a value equal to the bus rate multiplied by the distance.
If this condition does not return true either, in the else body we store in the price variable a value equal to the distance multiplied by the train rate, because the train is the cheapest transport for the given distance.
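The distance checks described above, again as a self-contained Python fragment with sample values standing in for the earlier steps:

```python
n = 25             # sample distance in km; normally read from the input
taxi_rate = 0.79   # sample day rate, chosen by the previous step

# Choose the cheapest allowed transport for the given distance.
if n < 20:
    price = 0.70 + n * taxi_rate   # below 20 km only the taxi is available
elif n < 100:
    price = n * 0.09               # from 20 km the bus is allowed and cheaper
else:
    price = n * 0.06               # from 100 km the train is cheapest

print(price)   # 2.25
```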
### Printing the Output Data
After we have checked the distance conditions and we have calculated the price of the cheapest transport, we have to print it. The task does not specify how to format the result, therefore, we just print the variable.
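Putting the pieces together, one possible complete Python rendering of the steps above might look like this (the final rounding is my own addition, to keep floating-point noise out of the printed value):

```python
n = int(input())           # number of kilometers
period = input().lower()   # "day" or "night"

# Taxi rate depends on the period of the day; bus and train do not.
taxi_rate = 0.79 if period == "day" else 0.90

# Cheapest allowed transport for the given distance.
if n < 20:
    price = 0.70 + n * taxi_rate
elif n < 100:
    price = n * 0.09
else:
    price = n * 0.06

print(round(price, 2))   # e.g. 5/day -> 4.65, 7/night -> 7.0, 180/night -> 10.8
```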
# Is there a quantum NC algorithm for computing GCD?
From the comments on one of my questions on MathOverflow I get the feeling that the question regarding GCD being in $\mathsf{NC}$ vs. $\mathsf{P}$ is akin to the question regarding Integer Factorization being in $\mathsf{P}$ vs. $\mathsf{NP}$.
Is there something like a "quantum $\mathsf{NC}$" algorithm for GCD as there is a quantum polynomial time ($\mathsf{BQP}$) algorithm for Integer Factorization?
Related question: complexity of greatest common divisor (gcd)
• when you cross-post it's better to write the question again. – Alessandro Cosentino Mar 7 '13 at 9:49
## 1 Answer
First of all, there is a formal definition of "quantum-NC", see QNC on the zoo.
GCD is indeed a good candidate for a problem that might be in QNC even though it is not known to be in NC. However, finding a QNC algorithm for GCD is still an open problem.
The intuition for why this is believed plausible is that the Quantum Fourier Transform can be done in QNC.
Reference: Conclusion section of "R. Cleve and J. Watrous, Fast parallel circuits for the quantum Fourier transform", arXiv:quant-ph/0006004
• It would be nice if you can explain the relation between quantum Fourier transform and GCD. – Kaveh Mar 7 '13 at 17:16
• I agree with Kaveh. It would be nice to provide the relation. – T.... Mar 8 '13 at 2:56
• I don't think there is a direct relation. What I meant to say is that we suspect QNC to be more powerful than NC, because we can do QFT in QNC. So we ask if there is some more natural problem that is in QNC too, and one of the simplest natural problem that we don't know how to do in NC is GCD. At some point I had suspected that there is a relation between the two problems coming from the fact that QFT and GCD are both used as sub-routines in the period-finding algorithm, but I wasn't able to make this formal. Maybe other users can enlighten us more. – Alessandro Cosentino Mar 8 '13 at 13:16
• Hi Alessandro: Do you know if Polynomial GCD is in NC? – T.... Mar 28 '13 at 13:22
• @Arul: yes, it is. See von zur Gathen, Parallel algorithms for algebraic problems. dx.doi.org/10.1145/800061.808728 – Alessandro Cosentino Mar 28 '13 at 13:57 |
## April 29, 2004
### CFTs from OSFT?
#### Update 19 May 2004
I have finally found a paper which pretty much precisely discusses what I was looking for here, namely a relation between classical solutions of string field theory and deformations of the worldsheet (boundary-) conformal field theory. It’s
J. Klusoň: Exact Solutions in SFT and Marginal Deformation in BCFT (2003)
and it discusses how OSFT actions expanded about two different classical solutions correspond to two worldsheet BCFTs in the case where the latter are related by marginal deformations. In the words of the author of the above paper (p. 2):
Our goal is to show that when we expand [the] string field around [a] classical solution and insert it into the original SFT action $S$ which is defined on [a given] BCFT, we obtain after suitable redefinition of the fluctuation modes the SFT action $S'$ defined on $\mathrm{BCFT}'$ that is related to the original BCFT by inserting [a] marginal deformation on the boundary of the worldsheet. […] To say differently, we will show that two SFT action $S$, $S'$ written using two different BCFT, $\mathrm{BCFT}'$ which are related by marginal deformation, are in fact two SFT actions expanded around different classical solutions.
In equation (2.31) the deformed BRST operator is given, which is what I discuss in the entry below, but then it is shown in (3.8) that this operator can indeed be related to a (B)CFT with marginal deformation.
One subtlety of this paper is that the classical SFT solutions which are considered are large but pure gauge and hence naively equivalent to the trivial solution $\Phi_0 = 0$, but apparently only naively so. To me it would be interesting if similar results could be obtained for more general classical solutions $\Phi_0$.
#### Update 3rd May 2004
I have now some LaTeXified notes.
Here is a rather simple — indeed almost trivial — observation concerning open string field theory (OSFT) and deformations of CFTs, which I find interesting, but which I haven’t seen discussed anywhere in the literature. That might of course be just due to my insufficient knowledge of the literature, in which case somebody please give me some pointers!
#### Update 7th May 2004
I have by now found some literature where this (admittedly very simple but interesting) observation actually appears, e.g.
Here goes:
There have been some studies (few, though) of worldsheet CFTs for various backgrounds in terms of deformed BRST operators. I.e., starting from the BRST operator $Q_B$ for a given background, like for instance flat Minkowski space, one may consider the operator
(1)$\tilde Q_B := Q_B + \hat \Phi\,,$
where $\hat \Phi$ is some operator such that nilpotency $\tilde Q_B^2 = 0$ is preserved.
By appropriately commuting $\tilde Q_B$ with the ghost modes the conformal generators $\tilde L_m^\mathrm{tot}$ of a new CFT in a new background are obtained (the new background might of course be gauge equivalent to the original one).
See for instance
Mitsuhiro Kato: Physical Spectra in String Theories — BRST Operators and Similarity Transformations (1995)
and
Ioannis Giannakis: Strings in Nontrivial Gravitino and Ramond-Ramond Backgrounds (2002).
One problem is to understand the operators $\hat \Phi$, how they have to be chosen and how they encode the information of the new background.
Here I want to show, in the context of open bosonic strings, that the consistent operators $\hat \Phi$ are precisely the operators of left plus right star-multiplication by the string field $\Phi$ which describes the new background in the context of open string field theory.
In order to motivate this consider the (classical) equation of motion of cubic open bosonic string field theory for a string field $\Phi$ of ghost number one:
(2)$Q_B|\Phi\rangle + |\Phi \star \Phi\rangle = 0 \,,$
where for simplicity of notation the string field has been rescaled by a constant factor.
(I am using the notation as for instance in section 2 of
Kazuki Ohmori: A Review on Tachyon Condensation in Open String Field Theories (2001).)
If we now introduce $\hat \Phi$, the operator of star-multiplication by $\Phi$ defined by
(3)$\hat \Phi |\Psi\rangle := | \Phi \star \Psi \rangle$
then, due to the associativity of the star product this can equivalently be rewritten as an operator equation
(4)$\left( Q_B + \hat \Phi \right)^2 = 0$
because
(5)$\left( Q_B + \hat \Phi \right) \circ \left( Q_B + \hat \Phi \right) \circ = \underbrace{ Q_B \circ Q_B \circ }_{= 0} + \underbrace{ Q_B \circ \Phi \star + \Phi \star Q_B \circ }_{= (Q_B \Phi) \star } + \underbrace{ \Phi \star \Phi \star }_{ = (\Phi \star \Phi) \star} \,.$
(Here it has been used that $Q_B$ is an odd graded (with respect to ghost number) derivation on the star-product algebra of string fields, that $\Phi$ is of ghost number 1 and that the star-product is associative.)
It hence follows that the equations of motion of the string field $\Phi$ are precisely the necessary and sufficient condition for the operator $\hat \Phi$ to yield a nilpotent, unit ghost number deformation
(6)$\tilde Q_B = Q_B + \hat \Phi$
of the original BRST operator.
But there remains the question why $\tilde Q_B$, while nilpotent, can really be interpreted as a BRST operator of some sensible CFT. (Surely not every nilpotent operator on the string Hilbert space can be identified as a BRST operator!) The reason seems to be the following:
#### Update 21 May 2004
I have found out by now that what I was trying to argue here has already been found long ago in papers on background independence of string field theory. For instance on p.2 of
it says:
In this paper we show that if $\Psi_\mathrm{cl}$ is a solution of the classical equations of motion derived from the action $S(\Psi)$, then it is possible to construct an operator $\hat Q_B$ in terms of $\Psi_\mathrm{cl}$, acting on a subspace of the Hilbert space of combined matter-ghost CFT, such that $(\hat Q_B)^2 = 0$. $\hat Q_B$ may be interpreted as the BRST charge of the two dimensional field theory describing the propagation of the string in the presence of the background field $\Psi_\mathrm{cl}$.
We may consider, in the context of open bosonic string field theory, the motion of a single ‘test string’ in the background described by the excitations $\Phi$ by adding a tiny correction field $\psi$ to $\Phi$, which we want to interpret as the string field due to the single test string.
The question then is: What is the condition on $\psi$ so that the total field $\Phi + \psi$ is still a solution to the equations of motion of string field theory. That is, given $\Phi$, one needs to solve
(7)$Q_B(\Phi + \psi) + (\Phi + \psi)\star (\Phi + \psi) = 0$
for $\psi$. But since $\psi$ is supposed to be just a tiny perturbation of the filed $\Phi$ it must be sufficient to work to first order in $\psi$. This is equivalent to neglecting any self-interaction of the string described by $\psi$ and only considering its interaction with the ‘background’ field $\Phi$ - just as in the first quantized theory of single strings.
But to first order and using the fact that $\Phi$ is supposed to be a solution all by itself the above equation says that
(8)$Q_B |\psi\rangle + |\Phi \star \psi\rangle + | \psi \star \Phi \rangle = 0 \,.$
This is manifestly a deformation of the equation of motion
(9)$Q_B |\psi\rangle = 0$
of the string described by the state $\psi$ in the original background. Hence it is consistent to interpret
(10)$\tilde Q_B = Q_B + \{ \hat \Phi, \cdot \}$
as the new worldsheet BRST operator which corresponds to the new background described by $\Phi$.
If we again switch to operator notation the above can equivalently be rewritten as
(11)$\{ (Q_B + \hat \Phi), \hat \psi \} = 0 \,,$
where the braces denote the anticommutator, as usual.
Recalling that a gauge transformation $\Phi \to \Phi + \delta \Phi$ in string field theory is (for $\Lambda$ a string field of ghost number 0) of the form
(12)$\delta \Phi = Q_B \Lambda + \Phi \star \Lambda - \Lambda \star \Phi$
and that in operator language this reads equivalenty
(13)$\hat{\delta \Phi} = [ (Q_B + \hat \Phi), \hat \Lambda ]$
one sees a close connection of the deformed BRST operator to covariant exterior derivatives.
As is very well known (for instance summarized in the table on p. 16 of the above review paper) there is a close analogy between string field theory formalism and exterior differential geometry.
The BRST operator $Q_B$ plays the role of the exterior derivative, the $c$ ghosts correspond to differential form creators, the $b$-ghosts to form annihilators and the $\star$ product to the ordinary wedge ($\wedge$) product - or does it?
As noted on p.16 of the above review, the formal correspondence seems to cease to be valid with respect to the graded commutativity of the wedge product. Namely in string field theory
(14)$\Phi \star \Psi \neq \pm \Psi \star \Phi$
in general.
But the above considerations suggest an interpretation of this apparent failed correspondence, which might show that indeed the correspondence is better than maybe expected:
The formal similarity of the deformed BRST operator $\tilde Q_B = Q_B + \hat \Phi$ to a gauge covariant exterior derivative $\mathbf{d} + \mathbf{\omega}$ suggests that we need to interpret $\Phi$ not simply as a 1-form, but as a connection!
That is, $\Phi$ would correspond to a Lie-algebra valued 1-form and the $\star$-product would really be exterior wedge multiplication together with the Lie product, as very familiar from ordinary gauge field theory. For instance we would have expressions like
(15)$\left((\mathbf{d} + \mathbf{\omega})^2\right)^a{}_b = (\mathbf{d}\mathbf{\omega})^a{}_b + \mathbf{\omega}^a{}_c \wedge \mathbf{\omega}^c{}_b \,.$
In such a case it is clear that the graded commutativity of the wedge product is broken by the Lie algebra products.
Is it consistent to interpret the star product of string field theory this way? Seems to be, due to the following clue:
Under the trace graded commutativity should be restored. The trace should appear together with the integral as in
(16)$\int \mathrm{tr} \mathbf{\omega}^a{}_c \wedge \mathbf{\gamma}^c{}_b = \pm \int \mathrm{tr} \mathbf{\gamma}^a{}_c \wedge \mathbf{\omega}^c{}_b \,.$
But precisely this is what does happen in open string field theory in the formal integral. There we have
(17)$\int \Phi \star \Psi = \pm \int \Psi \star \Phi \,.$
All this suggests that one should think of the deformed BRST operator as morally a gauge covariant exterior derivative:
(18)$\tilde Q_B = Q_B + \hat \Phi \sim \mathbf{d} + \mathbf{\omega} \,.$
That looks kind of interesting to me. Perhaps it is not new (references, anyone?), but I have never seen it stated this way before. This way the theory of (super)conformal deformations of (super)conformal field theories might nicely be connected to string field theory.
In particular, it would be interesting to check the above considerations by picking some known solution $\Phi$ to string field theory and computing the explicit realization of $\tilde Q_B$ for this background field, maybe checking if it looks the way one would expect from, say, worldsheet Lagrangian formalism in the given background.
Posted at 6:42 PM UTC | Permalink | Followups (95)
## April 26, 2004
### Billiards at half-past E10
#### Posted by Urs Schreiber
Last week I gave a seminar talk on cosmological billiards and their relation to proposals that M-theory might be described by a 1+0 dimensional sigma-model on the group of $E_{10}/K(E_{10})$. I had mentioned that already several times here at the Coffee table and we had some interesting discussion over at sci.physics.strings. But while preparing the talk it occurred to me that the basic technical observation behind this conjecture is so simple and beautiful that it deserves a separate entry. I’ll summarize pp. 65 of
T. Damour, M. Henneaux & H. Nicolai: Cosmological Billiards (2002).
So how would the equations of motion of geodesic motion on a Kac-Moody algebra group manifold look like, in general?
A Kac-Moody (KM) algebra is a generalization of an ordinary Lie algebra. It is determined by its rank $r$, an $r \times r$ Cartan matrix $A = (a_{ij})$ and is generated from the $3r$ Chevalley-Serre generators
(1)$\{h_i, e_i, f_i\}_{i=1,\cdots,r}$
which have commutators
(2)$[h_i,h_j] = 0$
(3)$[e_i,f_j] = \delta_{ij} h_j$
(4)$[h_i,e_j] = a_{ij}\, e_j$
(5)$[h_i,f_j] = - a_{ij} f_j \,.$
(One should think of the SU(2) example where $h = J^3$, $e = J^+$ and $f = J^-$.)
The elements of the algebra are obtained by forming multiple commutators of the $e_i$ and the $f_i$.
(6)$E_{\alpha,s} = [e_{i_1},[e_{i_2},[\cdots ,[e_{i_{p-1}},e_{i_p}]\cdots]]]$
(7)$E_{-\alpha,s} = [f_{i_1},[f_{i_2},[\cdots ,[f_{i_{p-1}},f_{i_p}]\cdots]]] \,.$
Here, as always in Lie algebra theory, the $\alpha$ are the so called roots, i.e. the ‘quantum numbers’ with respect to the Cartan subalgebra generators $h_i$:
(8)$[h_i, E_{\alpha,s}] = \alpha_i E_{\alpha,s}$
and $s = 1, \cdots \mathrm{mult}(\alpha)$ is an additional index due to possible degeneracies of the $\alpha$s. Not all of these elements are to be considered different, but instead one has to mod out by the Serre relations
(9)$\mathrm{ad}(e_i)^{1-a_{ij}}(e_j) = 0$
(10)$\mathrm{ad}(f_i)^{1-a_{ij}}(f_j) = 0$
which should be thought of as saying that the $E_\alpha$ are nilpotent ‘matrices’ with entries above the diagonal, while the $E_{-\alpha}$ are nilpotent with entries below the diagonal.
(I’d be grateful if anyone could tell me why these Serre relations need to look the way they do…)
A set of simple roots $\alpha^i$ generates, by linear combination, all of the roots and the Cartan matrix is equal to the normalized inner products of the simple roots
(11)$a_{ij} = 2\frac{\langle \alpha^i | \alpha^j \rangle}{\langle \alpha^i | \alpha^i \rangle} \,,$
where the product is taken with respect to the unique invariant metric (just as for ordinary Lie algebras).
The nature of the KM algebra depends crucially on the signature of the Cartan matrix $A$. We have three cases:
• If $A$ is positive definite, then the KM algebra is just an ordinary finite Lie algebra $[T^a,T^b] = f^{ab}{}_c T_c$.
• If $A$ is semidefinite, then the KM algebra is an infinite affine Lie algebra, or equivalently a current algebra in 1+1 dimensions: $[j^a_m, j^b_n] = m \eta^{ab}\delta_{m+n,0} + f^{ab}{}_c j^c_{m+n}$.
• If, however, $A$ is indefinite, we obtain an infinite KM algebra with exponential growth, which is, I am being told, relatively poorly understood in general. But this is the case of interest here!
A general element of the group obtained from such an algebra by formal exponentiation is of the form
(12)$\mathcal{V} = \underbrace{\exp(\beta^i h_i)}_{=\mathcal{A}} \underbrace{\exp\left( \sum_{\alpha,s} \nu_{\alpha,s} E_{\alpha,s} \right)}_{= \mathcal{N}} \,.$
If we make the coefficients functions of a single parameter $\tau$ then we get the tangent vectors $P$ to the trajectory in the group ‘manifold’ traced out by varying $\tau$ by writing:
(13)$P := \frac{1}{2} \left( \dot \mathcal{V} \mathcal{V}^{-1} + \left( \dot \mathcal{V} \mathcal{V}^{-1} \right)^{\mathrm{T}} \right) \,.$
This is exactly as for any old Lie group, the only difference being that we have here projected onto that part which is ‘symmetric’ with respect to the Chevalley involution which sends
(14)$h_i^\mathrm{T} = h_i$
(15)$e_i^\mathrm{T} = f_i$
(16)$f_i^\mathrm{T} = e_i \,.$
The algebra factored out this way is the maximal compact subalgebra of our KM algebra, so that we are really dealing with the remaining coset space (which, unless I am confused, should hence be a symmetric space).
Now the fun thing which I wanted to get at is this: If we define generalized momenta $j_{\alpha,s}$ such that
(17)$\dot \mathcal{N} \mathcal{N}^{-1} = \sum_{\alpha,s} j_{\alpha,s} E_{\alpha,s} \,,$
then it is very easy to check, using the defining relations of the KM algebra, that
(18)$P = \dot \beta^i h_i + \frac{1}{2} \sum_{\alpha,s} j_{\alpha,s} \exp(\alpha_i \beta^i) \left( E_{\alpha,s} + E_{-\alpha,s} \right) \,.$
Using the fact that the non-vanishing inner products of the algebra elements are
(19)$\langle h_i| h_j\rangle = g_{ij}$
(20)$\langle E_{\alpha,s}| E_{\beta,s}\rangle = \delta_{s,t} \delta_{\alpha+\beta,0}$
one finally finds the Lagrangian describing geodesic motion of the coset space:
(21)$\mathcal{L} \propto \langle P | P \rangle = g_{ij}\dot \beta^i \dot \beta^j + \frac{1}{2} \sum_{\alpha,s} \exp(2 \alpha_i \beta^i) j_{\alpha,s}^2 \,.$
(Here $g$ is the invariant metric of the algebra.)
The point is that there is a free kinetic term in the Cartan subalgebra plus all the off-diagonal kinetic terms which all couple exponentially to the Cartan subalgebra coordinates.
It is obvious that for very large values of $\beta$ the off-diagonal terms ‘freeze’ and leave behind effective potential walls which constrain the motion of the $\beta$s to lie within the Weyl chamber of the algebra, namely the polywedge associated with the simple roots (all other roots generate potential walls which lie behind those of the simple roots).
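To make the ‘freezing wall’ picture concrete, here is a small toy integration (my own illustration, not taken from the papers above): a single coordinate $\beta$ moving against one exponential term behaves, for all practical purposes, like a free particle bouncing elastically off a sharp wall.

```python
import math

# Toy version of one exponential wall: V(beta) = (C/2) * exp(2*alpha*beta).
# Far from the wall the motion is free; near it the particle is reflected,
# which is the billiard picture described above. All numbers are made up.
alpha, C = 1.0, 1.0
beta, p = -5.0, 1.0      # start far from the wall, moving towards it
dt, steps = 1.0e-3, 20000

for _ in range(steps):
    # symplectic Euler step for H = p^2/2 + (C/2) exp(2*alpha*beta)
    p -= dt * alpha * C * math.exp(2.0 * alpha * beta)   # force = -dV/dbeta
    beta += dt * p

print(round(p, 2))   # approximately -1.0: the incoming velocity has reversed
```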
Anyone familiar with classical cosmology immediately recognizes the above Lagrangian as being precisely of the form as those mini/midi superspace Lagrangians that govern the dynamics of homogeneous modes of general relativity. There the $\beta^i$ are the logarithms of the spatial scale factors of the universe.
Indeed, it can be checked to low order that the Lagrangian of $E_{10}$ in the above sense reproduces that of 11d SUGRA when the latter is suitably expanded about homogeneous modes. That’s the content of
T. Damour, M. Henneaux & H. Nicolai: $E_{10}$ and the ‘small tension’ expansion of M Theory (2002).
But the crucial point is that there are many more degrees of freedom in the $E_{10}$ sigma model than can correspond to supergravity. There are indications that these can indeed be associated with brane degrees of freedom of M-theory:
Jeffrey Brown, Ori Ganor & C. Helfgott: M-theory and $E_{10}$: Billiards, Branes, and Imaginary Roots,
which, unfortunately, I still have not read completely.
Posted at 12:27 PM UTC | Permalink | Followups (1)
## April 23, 2004
### New York, New York
#### Posted by Urs Schreiber
I have visited Ioannis Giannakis at Rockefeller University, New York, last week, and by now I have recovered from my jet lag and caught up with the work that has piled up here at home enough so that I find the time to write a brief note to the Coffee Table.
Ioannis Giannakis has worked on infinitesimal superconformal deformations and I became aware of his work while I happened to write something on finite deformations of superconformal algebras myself. In New York we had some interesting discussion in particular with regard to generalizations of the formalism to open strings and to deformations that describe D-brane backgrounds.
The theory of superconformal deformations was originally motivated from considerations concerning the effect of symmetry transformations of the background fields on the worldsheet theory. It so happened that while I was still in New York a heated debate concerning the nature of such generalized background gauge symmetries and their relation to the worldsheet theory took place on sci.physics.strings.
People interested in these questions should have a look at some of the literature, like
Jonathan Bagger & Ioannis Giannakis: Spacetime Supersymmetry in a nontrivial NS-NS superstring background (2001)
and
Mark Evans & Ioannis Giannakis: T-duality in arbitrary string backgrounds (1995) ,
but the basic idea is nicely exemplified in the theory of a single charged point-particle in a gauge field $A$ with Hamiltonian constraint $H = (p-A)^2$. A conjugation of the constraint algebra and the physical states with $\exp(i \lambda)$ induces of course a modification of the constraint
(1)$(p- A)^2 \to (p - A + d\lambda)^2$
which corresponds to a symmetry tranformation in the action of the background field $A$. In string theory, with its large background gauge symmetry (corresponding to all the null states in the string’s spectrum) one can find direct generalizations of this simple mechanism. (Due to an additional subtlety related to normal ordering, these are however fully under control only for infinitesimal shifts or for finite shifts in the classical theory.)
More importantly, as in the particle theory, where the trivial gauge shift $p \to p + d\lambda$ tells us that we should really introduce gauge connections $A$ that are not pure gauge, one can try to guess deformations of the worldsheet constraints that correspond to physically distinct backgrounds. This is the content of the theory of (super)conformal deformations. My idea was that there is a systematic way to find finite superconformal deformations by generalizing the technique used by Witten in the study of the relation of supersymmetry to Morse theory. The open question is how to deal consistently with the notion of normal ordering as one deforms away from the original background.
In order to understand this question better I tried to make a connection with string field theory:
Consider cubic bosonic open string field theory with the string field $\phi$ the BRST operator $Q$ for flat Minkowski background and a star product $\star$, where the (classical) equations of motion for $\phi$ are
(2)$Q \phi + c \phi \star \phi = 0 \,,$
for some constant $c$.
In an attempt to understand if this tells me anything about the propagation of single strings in the background described by $\phi$ I considered adding an infinitesimal ‘test field’ $\psi$ to $\phi$ and checking what equations of motion $\psi$ has to satisfy in order that $\phi + \psi$ is still a solution of string field theory. To first order in $\psi$ one finds
(3)$\left(Q + c \left\{\phi \star, \cdot\right\}\right) |\psi\rangle = 0 \,.$
If we think of the ‘test field’ $\psi$ as that representing a single string, then it seems that one has to think of
(4)$Q^\prime := Q + c \left\{\phi \star, \cdot\right\}$
as the deformed BRST operator which corresponds to the background described by the background string field $\phi$.
It is due to the fact that $a \star b$ and $b \star a$ in string field theory have no obvious relation that I find it hard to see whether $Q^\prime$ is still a nilpotent operator, as I would suspect it should be.
But assuming it is and that its interpretation as the BRST operator corresponding to the background described by $\phi$ is correct, then it would seem we learn something about the normal ordering issue referred to above: Namely as all of the above string field expressions are computed using the normal ordering of the free theory it would seem that the same should be done when computing the superconformal deformations. But that’s not clear to me, yet.
The campus of Rockefeller University.
Posted at 5:25 PM UTC | Permalink | Followups (17)
## April 14, 2004
### Power Supply
#### Posted by Urs Schreiber
I am currently visiting the Albert-Einstein institute in Potsdam (near Berlin). Hermann Nicolai had invited me for a couple of days in order to talk and think about Pohlmeyer invariants and related issues of string quantization.
It so happened that when I checked my e-mail while sitting in the train to Berlin I found a mail by Thomas Thiemann, Karl-Henning Rehren and Dorothea Bahns in my inbox, containing a pdf-draft of some new notes concerning what they flatteringly call “Schreiber’s DDF functionals” but what really refers to the insight that the Pohlmeyer invariants are a subset of all DDF invariants.
Glad that my journey should have such a productive beginning I read through the notes and began typing a couple of comments - when I realized that my notebook battery was almost empty.
Here is a little riddle: What are all the places in a german “Inter City Express” train where you can find a 230V power supply?
Right, there is one at every table. But when it’s the end of the Easter holidays all tables are occupied and when nobody is willing to let you sit on his (or her) lap then that’s it - or is it?
Not quite. For the urgent demands of carbon-based life forms there is fortunately a special room - and it does have a socket, just in case anyone feels like shaving on a train. I spare you the details, but in any case, this way, when I arrived at the AEI the discussion had already begun. :-)
After further discussion of Thiemann's and Rehren's comments with Kasper Peeters and Hermann Nicolai we came to believe that there are in fact no problems with quantizing the Pohlmeyer invariants in terms of DDF invariants. I wrote up a little note concerning the question if there are any problems due to the fact that the construction of the DDF invariants requires specifying a fixed but arbitrary lightlike vector on target space. One might think that this does not harmonize with Lorentz invariance, but in fact it does. I am still waiting for Thiemann's and Rehren's reply, though. Hopefully we don't have to fight that out on the arXiv! ;-)
It turned out that I am currently apparently the only one genuinely interested in what the Pohlmeyer invariants could be good for in standard string theory. It seems that everybody else either regards them as a possibility to circumvent standard results - or as an irrelevant curiosity.
Here is a sketchy list of some questions concerning Pohlmeyer invariants that I would find interesting:
The existence of Pohlmeyer invariants gives us a map from the Hilbert space of the single string to states in totally dimensionally reduced (super) Yang-Mills theory. Namely, every state $|\psi\rangle$ of the string (open, say) gives us a map from the space of u(N) matrices to the complex numbers, defined by
(1)$M : u(N) \to C$
(2)$A \mapsto \langle \psi| \mathrm{Tr P} \exp( \int A_\mu \mathcal{P}^\mu(\sigma)\,d\sigma ) |\psi\rangle .$
Does conversely every state on $u(N)$ define a state of the string? Apparently the answer is Yes.
What is the impact in this context of the fact that the Pohlmeyer holonomies
(3)$\mathrm{Tr P} \exp( \int A_\mu \mathcal{P}^\mu(\sigma) \,d\sigma)$
are Virasoro invariants? We have a vague understanding (hep-th/9705128) of what the map from $A$ to $|\psi\rangle$ has to do with string field theory. Can something similar be said about the Pohlmeyer map from $|\psi\rangle$ to $A$?
What is the meaning on the string theory side of a Gaussian ensemble in $u(N)$, as used in Random Matrix Theory?
I have a very speculative speculation concerning this last question: We know that the IKKT action is just BFSS at finite temperature. But the BFSS canonical ensemble
(4)$\exp(\mathrm{const}\, \mathrm{Tr} P^2 + \text{interaction})$
is just the RMT Gaussian ensemble, up to the interaction terms. It might be interesting to discuss the limit in which the interaction terms become negligible and see what this means in terms of the Pohlmeyer map from gauge theory to single strings.
Incidentally, without the interaction terms we are left with RMT theory which is known to describe chaotic systems. This seems to harmonize nicely with the fact that also in (11d super-) gravity, if the spacetime point interaction is turned off (near a spacelike singularity) the dynamics becomes that of a chaotic billiard.
Somehow it seems that the Pohlmeyer map relates all these matrix theory questions to single strings. How can that be? Can one interpret the KM algebra of 11d supergravity as a current algebra on a worldsheet?
Sorry, this is getting a little too speculative. :-) But it highlights another maybe interesting question:
What is the generalization of the Pohlmeyer invariant to non-trivial backgrounds?
I have mentioned somewhere that whenever we have a free field realization of the worldsheet theory (like on some pp-wave backgrounds) the DDF construction goes through essentially unmodified and hence the Pohlmeyer invariants should be quantizable in such a context, too.
But what if the background is such that the DDF invariants are no longer constructible, or rather, if their respective generalization ceases to have the correct properties needed to relate them to the Pohlmeyer invariants?
In summary: While it is not clear (to me at least) that the Pohlmeyer invariants can help to find (if it really exists) an alternative quantization of the single string, consistent but inequivalent to the standard one, can we still learn something about standard string theory from them?
## April 8, 2004
### Billiards, random matrices, M-theory and all that
#### Posted by Urs Schreiber
I am currently at a seminar on quantum chaos and related stuff. You cannot enjoy meetings like these without knowing and appreciating the Gutzwiller trace formula which tells you how to calculate semiclassical approximations to properties of the spectrum of chaotic quantum systems (like Billiards and particles on spaces of constant negative curvature) by summing over periodic classical paths.
One big puzzle was, and still is to a large extent, why random matrix theory reproduces the predictions obtained by using the Gutzwiller trace formula.
In random matrix theory you pick a Gaussian-like ensemble of matrices (orthogonal, symplectic or unitary ones) and regard each single such matrix as the Hamiltonian operator of some system. It is sort of obvious why this is what one needs for systems which are subject to certain kinds of disorder. But apparently nobody has yet understood from a conceptual point of view why it works for single particle systems which are calculated using Gutzwiller's formula. But there is quite some excitement here that one is at least getting very close to the proof that Gutzwiller does in fact agree with RMT, see
Stefan Heusler, Sebastian Müller, Petr Braun, Fritz Haake, Universal spectral form factor for chaotic dynamics (2004) .
One hasn’t yet understood why this agrees, only that it does so. My hunch is that it has to do with the fact that by a little coarse graining we can describe the classical chaotic paths as random jumps and that the random matrix Hamiltonians are just the amplitude matrices which describe these jumps.
But anyway. ‘Why all this at a string coffee table?’, you might ask.
Well, while hearing the talks I couldn’t help but notice the fact that I actually do know one apparently unrelated but very interesting example of a system which, too, is described both by chaotic billiards as well as by random matrices. This system is - 11 dimensional supergravity.
I had mentioned before the remarkable paper
T. Damour, M. Henneaux & H. Nicolai: Cosmological Billiards (2002)
where it is discussed and reviewed how theories of gravity (and in particular of supergravity) close to a spacelike cosmological singularity decouple in the sense that nearby spacetime points become causally disconnected and how that leads to a mini-superspace like dynamics in the presence of effective ‘potential walls’ which is essentially nothing but a (chaotic) billiard on a hyperbolic space.
(This paper is actually a nice thing to read while attending a conference where everybody talks about billiards, chaos, coset spaces, symmetric spaces, Weyl chambers and that kind of stuff.)
So 11d supergravity in the limit where interactions become negligible is described by a chaotic billiard just like those people in quantum chaos are very fond of.
But here is the crux: 11d supergravity is also known to be approximated by the BFSS matrix model. Just for reference, this is a system with an ordinary quantum mechanical Hamiltonian
(1)$H = \mathrm{Tr}\left( \frac{1}{2}\dot X^i \dot X_i - \frac{1}{4}\left[X^i,X^j\right] \left[X_i,X_j\right] \right) + \text{fermionic terms} \,,$
where the $X^i$ are large $N \times N$ matrices that describe D0-branes and their interconnection by strings or, from another point of view, blobs of supermembrane.
Hm, but now let’s again forget about the interaction terms. Then the canonical ensemble of this system is formally that used in random matrix theory!
Am I hallucinating or does this look suggestive?
I think what I am getting at is the following: Take Damour&Henneaux&Nicolai’s billiard which describes 11d supergravity. Now look at its semiclassical behaviour. It is known that this is governed by random matrix theory (But we have to account for some details, like the fact that the mini-superspace billiard is relativistic. Maybe we have to go to its nonrelativistic limit.) We realize that the weight of the random matrix ensemble is the free kinetic term of the BFSS model. Therefore we might be tempted to speculate that the true ensemble of random matrices which is associated with 11d supergravity away from the cosmological singularity is obtained by including the $[X,X]^2$ interaction term of the BFSS Hamiltonian in the weight. With this RMT description in hand, try to find the corresponding billiard motion. Will it coincide with the speculation made by DHN about the higher-order corrections to their mini-superspace dynamics?
In any case, I see that apparently random matrix theory (‘like every good idea in physics’ ;-) has its place in string theory. I should try to learn more about it.
Posted at 12:34 AM UTC | Permalink | Followups (14) |
## Monday, December 31, 2012
### Anime — 2012 in review
A fairly weak year in all, with pretty much no series that grabbed me, as can be seen from the gradual slowdown of reviews over the year.
Carrying over from last year, I watched Chihayafuru, and the third season of Natsume Yuujinchou; and on DVD, I started to watch Puella Magi Madoka Magica and House of Five Leaves, but haven't finished either of those yet.
#### Winter season
New Prince of Tennis I watched as a filler, back when we had enough series on the go to watch a couple of episodes a couple of evenings a week, and was mostly harmless.
I'm also still in the middle of the 4th season of Natsume Yuujinchou, which has lost a measure of the charm that it opened with, but will cover that separately when I'm done.
Last, and the best looking of the season's choices, Moretsu Pirates, which started out in surprisingly good form, with our heroine rising to the challenge of an unwanted inheritance, and showing some decent can-do, by teaching Kzinti Lesson 101 to a mysterious ship that had been dogging the school yacht club cruise exercise. But that potential plot line just got dropped, and as we moved onto the actual space pirates, arc after arc followed where any tension or challenge was undercut into anti-climax, as everything had to end up happy-happy and safe. And so a two cour series dragged out until late November.
#### Spring season
Ozma, I reviewed at the time.
Saki: Achiga-hen was a spin-off from the earlier school-girl mahjong series, focusing on a different school team battling their way into the national championship, but with only 12 episodes to date (more to air in the new year) covering from years before the events of the first series into the tournament stage beyond where it had reached, it's way more compressed. Three episodes pass before any games are played, and even then most of them are done in fast-forward (so no real display of their magical-girl style abilities), until the final match, where the first game is then spread over the last three episodes.
Everything else is unfinished, may not finish.
In arbitrary order -- Space Brothers had an interesting premise, but a mix of slow, slow pacing and Japanese humour ground me to a halt about ten episodes in. One episode of Kuroko no Basuke was enough so soon after New Prince of Tennis (only so many magical boy anime I can take at a run); Folktales of Japan turned into too much of a good (the same) thing, with two tales out of three seeming to involve a bumpkin stumbling onto some magical fortune, and ne'er-do-well neighbours trying and failing to duplicate the trick; Kids on the Slope was a josei series about jazz and school life in the 1960s, but when it got to be an "A likes B likes C likes D likes..." chain, I stopped caring; and while the art style is interesting, Tsuritama's characters failed to engage me.
#### Summer season
So, I'm in the middle of, and will probably finish Uta Koi, a collection of loosely inspired stories about the Hundred Poems.
Apart from that, I tried an episode of Jinrui wa Suitai Shimashita, which was amusing for starting with the MC saying the terrifying words "I'm from the government, and I'm here to help."; but it laid the satire on without subtlety, and from what I saw in threads on /a/, that continued even beyond the bounds of reason for the apparent setting. So not picked up.
#### Autumn season
Unfortunately, Crunchyroll didn't pick up JoJo's Bizarre Adventures, which was the only title of the season that would have been a "must watch". I did try the first episode of Shin Sekai Yori, but it also failed to excite; and subsequent reactions didn't help, as it became clear that the adaptation was being done on a shoestring budget and was suffering from having to pare down the original material hard to fit into the two cour format ("If you'd read the novel, you'd know that..."), so not picked up either.
#### Blast from the past — Neon Genesis Evangelion
So I actually rewatched the series that has been eating up a lot of my fanac time for the last nine years, the original DVDs with the as-aired series and none of the later retcons in the so-called Director's Cut revisions, as I did first time around. The main difference was that I watched with Japanese language and subbed all the way, rather than switching from the Texan dub part-way through, which had the effect, inter alia, of making Asuka less annoying in the episodes before she gets clobbered by the plot.
This is a series where you can't really step into the same river twice even if there are always going to be details that elude recollection. The first watching, unspoiled save knowing that if you were going to watch a giant robot anime, NGE should be it, but that the last two episodes were controversial, meant that everything came as a shock. After years of taking it apart almost frame by frame, with reference to the script in the main, less so.
It still falls into the stages of being Shinji, plus some Rei; then a long arc in the middle where it's really Asuka's story so far as I'm concerned, and only at the end, when nobody else is left standing, does it become Shinji's again. But then as is plain to see, it was Asuka whose plight struck me deeply first time around, and still remains the character that I wished the best for.
And, even knowing the orthodox interpretations, typically based on the DC material, there was nothing that on review shook any of the surprisingly heterodox conclusions I came to, from Asuka's lack of any serious interest in Shinji through to the completely democratic nature of the upload-style "Gendo wins" Instrumentality at the end.
#### "Rebuild of Evangelion"
Yeah, the third of the remake movies came out about six weeks ago in Japan. After having been somewhat jaundiced by the first apart from the updated eye-candy, I still haven't watched the second yet. Where I said back then that it looked to be heading into bad fanfic territory based on the "Next Episode Preview", things panned out much that way in the end by all accounts, with all the characters reworked, in the new episode of the Shinji Ikari Show.
So, yeah.
Meanwhile the latest episode, Evangelion:Q seems to have discarded everything from its preceding "Next Episode Preview", and would be a whole new super-robot franchise, were it not that it's actually yet another Shinji movie, and one driven by an idiot plot to boot.
I wouldn't say that I liked the original despite Shinji, but back then he was just a harmless nebbish most of the time, despite his own self-denigration. The revised version from EoE was one who lived down to that evaluation, and this third version seems worse, if anything.
So, unless the concluding episode, Evangelion:||, pulls off something remarkable to redeem the preceding films, I doubt I shall be bothering with those, either.
### 5592.9
is what the bike odo reads now; or just over 2390 miles in the year, 2688 total when adding in all the holiday cycling on hired bikes. Almost 600 miles in the last quarter, and 135 miles in December alone, only about a third of which came from unexpected journeys.
Having made the experiment of cycling home after lunch and working from home on the last working Friday of the year, I might do that again for other winter weeks, if there is good weather. It will help keep me from getting quite so out of condition for the spring.
## Wednesday, December 26, 2012
### Fifty years ago, today...
It snowed, and the snow didn't shift; temperatures staying below zero for more than a month. It was the winter where I got put off drinking milk, because our 1/3 pint bottles of school milk were delivered frozen before school and then jammed under radiators until break, resulting in lukewarm cream with blue icebergs floating in it.
Today by contrast, it is above freezing, and started bright, so I went out for a spin on the bike, passing through the one remaining flooded bit in the vicinity, where a village duckpond has annexed the road nearby; though rain has now set in again.
## Saturday, December 22, 2012
### Wet, wet wet!
It has been chucking it down these last few days, and looks to be continuing that way.
Thursday morning when driving to work, there were a few very large pools on the roads where water was collecting after draining from the surrounding fields, though the Bourn Brook didn't seem to have risen significantly; so I went home at lunch time by a longer route, to be able to see and avoid water hazards, and worked remotely for the rest of the day.
Friday's forecast being for a respite, I cycled to work (Monday's forecast showing wet), and there were road closures for flooding around Caldecote then, though they were gone when I headed home mid-afternoon.
Dreaming of a wet Christmas is all very well, though the mild weather means I really need to go mow the lawn, but it's far too wet for that -- or much of the other midwinter tidying that I would normally do over the break.
## Sunday, December 16, 2012
### Powershell 3 -- automatic variable gotcha
PowerShell automatic variables are useful for giving a way in to system state, but oh, how I wish that they had been sigilled rather than looking like user-definable names that one might actually wish to define. And worse when the behaviour of the variable name changes between revisions.
Here's an example that bit me the other day, in the wake of WinRM 3 being pushed out on automatic update into an environment that had previously been .net 4, PowerShell 2 only. It was a script a bit like this (but without the $host reference):

In PowerShell 2:

```
Major  Minor  Build  Revision
-----  -----  -----  --------
2      0      -1     -1

System.Xml.XmlDocument

xml            : version="1.0" encoding="utf-8"
xml-stylesheet : type="text/xsl" href="path\to\microsoft fxcop 1.36\Xml\FxCopReport.xsl"
FxCopReport    : FxCopReport
```

But in PowerShell 3:

```
Major  Minor  Build  Revision
-----  -----  -----  --------
3      0      -1     -1

System.Object[]

<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="path\to\microsoft fxcop 1.36\Xml\FxCopReport.xsl"?>
<FxCopReport ...
```

All because $input is an automatic variable rather than a user-definable name -- and, worse, one that has had some semantic change between revisions. Now, if it had been $$input, with the extra $ being a reserved character, this sort of unwitting name clash could never have happened. And there's no real penalty in having to type $$host to get the PowerShell host data, or similar.
## Friday, November 30, 2012
### 43 miles to go
Due to some unforeseen opportunities for cycling. including a wonderful ride by the light of the full moon, to and from dinner out on Wednesday, I'm now closer than I had expected to my stretch goal for the year (making a total 5500 miles on my bike alone in 2.5 years) with a month to go. And with things to do over the weekend, and dry if cold weather, doing the best part of that remaining 43 miles in the next two days seems quite plausible.
Might I have to aim at 5555 rather than 5500? That would be pushing it, trying to fit in just under a hundred miles in the next month.
48 Hours Later: Two thirds of that 43 miles achieved, running errands in cold and dry (except where the sun had thawed things out by Sunday afternoon) conditions.
## Tuesday, November 27, 2012
### .Net under the covers
An interesting little oddity I stumbled across the other day, involving StringBuilder.AppendFormat and a string which, it turned out, had had a GUID already expanded into it, with an unexpected exception as the result.
So it looks like the parse for a number is run before checking that the format is even sane; not what one would have expected.
I've actually gotten around to reading something that isn't on-line chat or technical over the last few months, so time for a round-up
#### The Clockwork Rocket by Greg Egan
Having skipped his intervening fantasy Iran novel, the return to physics driven story, albeit with a gimmick given away by a glance at the blurb, seemed worth a go.
Now, half the plot of his earlier Incandescence was how a woman in a pre-industrial society managed to avert an existential threat to her world by being Newton and Einstein rolled into one. Now, stop me if you've heard this one before, but the whole plot of this latest book is how a woman in a pre-industrial society manages to begin to mitigate an existential threat to her world by being Newton and Einstein rolled into one.
When, some chapters in, it became clear that this was indeed what was going on, I just went to the online auctorial physics infodump, because it had failed the "Who are these people, and why should I care?" test. One canon Sue was OK, but this got a bit too much.
#### Rivers of London by Ben Aaronovitch
By contrast, this book is covered with all the warning signs that normally turn me off -- fantasy, urban fantasy, secret wizards...
But there was something that seemed intriguing about it anyway, so I gave it a try. And such a contrast -- or to coin a phrase "That's the way to do it!"
And there are sequels. The second, Moon over Soho benefits from having set the scene already, and gives a chance to thicken the plot. The third, Whispers Underground then veers away from the line set by the first two into a more "secret London" sort of story, with a central mystery that wasn't really the same sort of police procedural sort of mystery that had driven the others.
Still, two out of three ain't bad. I just hope he doesn't just flog this setting to death.
## Thursday, November 08, 2012
### Take a hike, Wiggo.
On the news this morning, there was an item about Bradley Wiggins having been knocked off his bike and rushed to hospital -- which is not a nice thing to happen to anyone. But then they had a clip from him where in the first breath, he was invoking the sledgehammer of State against the victims.
If he'd said "I'm glad I was wearing a helmet, and I think all cyclists should.", that would have been fine; but we know that compulsion results in lowered cycling participation, as indeed does even the perception that you have to dress funny to do it. Certainly, I've cycled more since I decided that choosing headgear for the weather (to shade the back of the neck, to keep the head dry, or warm) made it more appealing than the one size fits all conditions notion of armour that I used to do.
## Sunday, November 04, 2012
### Out of touch?
There has been much waxing lyrical about touch UI in the last few days, from the likes of Jeff Atwood and even Herb Sutter. But somehow I don't feel that I'm in with this particular party.
Let's face it -- touch, just plain touch, has been around since the Apple Newton, and was workaday fare when it was called a Palm Pilot. And back in those days, around the turn of the century, even those 160x160 px screens and 16Mb of memory were good all-around workhorses. I had a couple of the Handspring PDAs, and later one of the Compaq palm-tops, and used them to death -- with the MessagEase keyboard, I could type as fast as I could compose, tapping and dragging the stylus; and I could do easy and accurate cursor navigation and text selection. With the PLua app, I could even code on and for the device (and have a single PalmOS app to prove it, for rolling various dice combinations for role playing games).
Fast-forward almost a decade, and I picked up one of the early and unofficial Android 2.1 tablets (the ViewQuest Slate); and though my skills on the MessagEase keyboard were somewhat rusty, I was still able to type faster than the device could keep up with, despite the ~3 iterations of Moore's law gone past in the interim. And the text selection model, combined with the necessarily fat-fingered capacitive screen, was pretty much unusable. While I could conveniently browse the web while on the khazi and compose terse forum posts, which was a plus over the old PDAs, anything that would involve quoting a section of another post just had to wait until I got onto a real computer.
This year, I've upgraded to an Asus Transformer, currently running JellyBean; and you know what? Despite the quad-core processor, the touch keyboard is still not as responsive as the Handspring one was, and is still way more intrusive. The big bonus to this device, though, is that it has a real physical keyboard and touchpad, which actually lets me select text in a way that isn't intrinsically painful.
So, what is touch good for? Well, even with the keyboard docked, scrolling pages is easier; and the multi-touch zoom makes it easy to expand web-comics so I can read the speech balloons. And I will prod at hyperlinks directly rather than use the pointer via the touchpad (except on really cluttered pages where the pointer precision is indispensable), because the tap-to-activate isn't in my muscle memory at all.
Overall, the device is something that suffices for me as a connection to the Wired when I'm on a cycling holiday, and don't expect to do more than read mail and blogs, and post a bit in the forums (or use it, at pinch, as a camera). But it's really a tool for snacking, rather than cooking -- for example, though I used the Asus while having coffee in bed this morning, I'm now composing this post on my 5 year old laptop running Vista, because it's still simpler to cut and paste hyperlinks that way, and the touchpad plus buttons are just like extensions of my will; and as such, I don't get the urge to reach forward and physically touch the screen, because my intent already has the same effect without conscious intervention, and with greater precision to boot.
And talking of the transformation of will into reality with precision, these modern touch devices suck at accessibility. If you are in a state where you are left with just limited use of one hand (and not even your preferred one at that), a touch device becomes essentially unusable, in a way that mouse plus on-screen keyboard manages to avoid.
Side rant -- one of the things I really, really do not like, even though I now have an Android device with the full Google experience (i.e. I can use Google Play, rather than go to one of the other flaky app depots out there, or scavenge for out of the way DDLs), are websites that detect Android and divert you to a "here's our app" page. No, I don't want your copiously and sulphurously qualified app, I just want your web page written in responsive and universal HTML+CSS+JavaScript, thank you very much -- just the text and the pictures, ta, very much.
## Saturday, November 03, 2012
### Updating the last vsvars32.ps1 you'll ever need for x64
I've been using Chris Tavares' ultimate vsvars32.ps1 in my powershell start-up script pretty much since it was first posted. But of course, it shows its age in that it assumes you're running 32-bit, given the registry path it uses.
Now, you can hard-code based on your hardware, or assume that anything you're using nowadays is 64-bit (except, of course, when you explicitly run the x86 version of powershell); and there are various ways of detecting current bit-ness. But the simplest way of doing things is to take the behaviour-driven style of detection used as standard in JavaScript and just write:
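(Not the actual script, just a minimal sketch of the idea; the 10.0 version key and the InstallDir value are used for illustration.)

```powershell
# Probe for the WOW64 registry branch first and fall back to the native one,
# rather than asking the OS about bitness explicitly.
$vsKey = 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\VisualStudio\10.0'
if (-not (Test-Path $vsKey)) {
    $vsKey = 'HKLM:\SOFTWARE\Microsoft\VisualStudio\10.0'
}
$installDir = (Get-ItemProperty $vsKey).InstallDir
```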
or reversing the order of the two registry keys to taste.
There are of course alternative ways of getting the path we're eventually aiming at, e.g. via an environment variable like VS###COMNTOOLS, where the ### is a number like 90, 100 or, nowadays, 110, depending on the VS version.
### Milestones
With the end of summer time, and temperatures getting into single figures C most of the time, this weekend it's time to declare the tomatoes and courgettes done, even the ones in the greenhouse, which have mildewed rather than ripened. One last courgette will go into tomorrow's supper, as will the last picking of runner beans, and then for veg it will just be the perpetual spinach until the broccoli (last year's or this) decide to come on stream, if at all.
We did actually get about a couple of pies' worth off the Bramley in the end, in the form of windfalls that had been concealed in the main foliage -- and there are still a dozen or so fruit on the Charles Ross; so the harvest may just about last us through the month, rather than into the new year.
Meanwhile, the car passed 7000 miles on the way home last night.
Later: The runner beans actually were enough for two dinners, so we had the last on Tuesday.
## Sunday, October 28, 2012
### Twenty Five Years On
Wikipedia tells me that the BBC TV series Blackadder III aired from 17 September to 22 October 1987, Thursdays at 9.30. It was one of the very few programs airing I felt inclined to watch that year -- and even so, I forgot to turn the box on for about half the episodes, being on late enough that either we would have already been doing something else after supper, or maybe even thinking about retiring for the night. Or, given that we'd taken a somewhat suspect tree down in the garden of the new house -- fortunately ahead of the not-a-hurricane a few days before the last episode -- it's quite likely that I might have been in the garden feeding the smaller branches and twigs (not suitable for firewood) into an incinerator.
And so it would have been about this time, shortly after the series had ended and I had calculated that we'd spent more per program that year (taking the then current license fee pro-rata) than the two of us going to the cinema instead would have cost, that we gave the box the heave-ho.
And that was before home computers of any note, and four years before I even got on the internet at work. As Tim Worstall pointed out today, it's opportunity costs like this, trading off one form of leisure activity against another, that are going to be contributing to an observed trend away from the original idiot box.
## Saturday, October 27, 2012
### End of season cycling
At the end of daylight savings, I've clocked up to 5378.7 miles on my own bike, and thus going over the 270 miles I needed in the quarter to match miles done that way in the whole of last year -- 5500 miles has to be the stretch goal for the year. And a last cycling holiday with CycleBreaks put in 98 miles more, which is going to make for a good total, despite the dreadful first half of the year.
View 25-Oct in a larger map
Shingle Street low tide
On the first day, cool, cloudy, but dry, I aimed for the King's Head at Orford for lunch -- which meant a lot of meandering to delay my arrival until decently after noon. Fortified by sausages and mash, and a couple of pints of Ghost Ship, I then got to Shingle Street soon after 3, and made my way to the water's edge to get my feet wet, before heading back. At ten past four, it was a bit late to stop at Sutton Hoo, but by the time I got back to the point where I could make a quick dash across the A12 to the hotel, it was still too early to want to stop for the day, so I got the OS map out (instead of the custom A4 sheets, folded back about half an inch at the ends to fit in the map cover designed for an OS map to be open in, that I'd been using to that point) for a quick off-piste run through Martlesham and the Bealings.
So, having done almost 60 miles, I felt a bit creaky when getting up from the seat in the bar to go to table, but nowhere as much as after doing 45 miles back in March from the same base. And now with winter closing in, I just have to face losing condition again.
View 26-Oct in a larger map
Sitting in the GAR seat
Friday, rain started while I was at breakfast, and I nearly called it a wash; but by ten, the rain had just about stopped, so I aimed for Framlingham for lunch, with a lesser amount of meandering (accompanied by drizzle for the last few miles) to arrive after the Lemon Tree Bistro would have opened. Sustained by lentil soup, a lamb burger, and a Black Forest sundae, I then headed back to stop at Sutton Hoo for a brief visit, before heading back through Woodbridge -- where I managed to foul up a gear-change on an uphill, taking a while to disentangle the chain from where it caught on the inside of the front chainwheel. So this time I did take the shortcut across the main road.
## Tuesday, October 23, 2012
### Miscellany
Too many distractions and too much to do at work to blog much at the moment, let alone explore recondite bits of software, alas.
Strong winds last week -- though not as strong as 25 years earlier -- blew down the second-flush plums, most of which were perfectly ripe and very tasty.
It was also Jemima's annual trip to the vet -- for which she prepared by catching and eating a pigeon. I'd certainly put the fact that she weighed ~100g more than a year ago onto having a tummy full.
And though it ended up with me lucking out on Friday afternoon on the supposed 20% chance of rain (according to the Met Office forecast that morning), I cycled to the office every relevant day last week, as well as out to dinner on Wednesday evening, and managing 260 miles so far this month, putting all my distance goals in reach by the end of the month, not the end of the year.
## Monday, October 15, 2012
### Nature notes
Despite blackberries in the hedgerows really having come on stream at the end of last month, yesterday, while out picking, I found a patch where buds were still bursting, even this late in the season.
A few nights ago, looking out late at night to check the weather, I noticed movement in the garden below, something shaggy moving in the shadows, not looking like any of the fluffy cats in the neighbourhood, and definitely not a fox (which we do get from time to time out here in the countryside). Then it moved its head into the streetlight, and revealed itself to be a (probably juvenile) badger.
## Sunday, September 30, 2012
### Harvest Home
Picked the last of the (April blossom) plums today -- the best part of a month later than usual; and made a first run for blackberries, ditto late. And while the cherry tomatoes outdoors seem to be coming to the end of their run, the runner beans are still going, albeit slowly, and the greenhouse tomatoes are just about hitting their stride.
The apples on the Charles Ross are getting pecked by birds, but when I've salvaged them, they still taste under-ripe.
The broccoli -- old and new -- were ravaged by caterpillars this month (again way late), so I don't know whether there'll be anything to harvest next year. But then I'm still getting leaves off the one surviving chard plant from last year as well as this year's run of perpetual spinach, so I have no clue about what the garden is doing.
### 5103.9
Well, I managed to surprise myself this quarter. After a fairly wet start, by the middle of this month it started to seem like I might manage to come close to covering 1000 miles on my bike, but with 23.3 miles still to cover on Friday evening, it looked like it might be a bit tight. But with fair warm, if breezy weather on Saturday, I could take a detour on the way home from town to do some gratuitous miles, and go over the target to about 1006 miles.
Then an unexpected excuse arose for another dash into town today for some shopping I'd missed yesterday, and suddenly 2^10 miles went from being an insane stretch goal, to feasible, to achieved, thanks to a side-trip to pick blackberries, at a total of 1025.7 for the quarter, not counting the miles on a hire bike, back in July; and 1901.5 -- 2000 miles would have been beyond insane as a goal -- for the year (or 2103 all-in).
That leaves me just under 160 miles to match last year's total, and 270 to match the miles done excluding hired transport. With a few weeks of cyclable season left, the former seems achievable, and the latter a plausible stretch.
## Monday, September 03, 2012
#### Humanity's Fire Trilogy by Michael Cobley
This starts off with a bait-and-switch, the solar system under attack by a swarm of machine intelligences as last-ditch starship launches try to escape the plague. Then fast-forward to a time when, after the temporary inconvenience, Earth is a minor power in a well populated part of the galaxy, and has rediscovered one of the colonies of that desperate exodus. Which just happens to be where a previous cycle of spacefaring civilisation fought the last major battle.
In all, it's harmless spaceship fiction, with a moderately novel cosmology of sedimenting layers of hyperspace, and a heavy seasoning of tropes from cyberpunk onwards; though the ending suddenly arrives in a cloud of anti-climax.
Despite the title, there's no Campbell style "humans are the best" -- the human colonies are more the people who are in the wrong place at the wrong time, with the major blows against the various antagonist factions coming from other sources.
Verdict: entertaining enough for me to have picked up a volume per year as it came out in paperback, but not a classic.
#### Helix by Eric Brown
His Kethani was harmless enough, and I'm a sucker for Big Dumb Object SF, so I picked this up as part of my stash of holiday reading.
Oh dear. The first chapter sets up a greentarded future a century or so hence, which somehow manages to combine a disease and CAGW ravaged humanity down to the tens of millions with a viable long-shot starship program. The second chapter flips to humans in fursuits in some frozen environment, suffering under a stock model Church. In the third, the starship, 1000 years out, traveling at half the speed of light is a parsec from a gravity well -- then a few moments later it crash lands on an apparent planetary surface.
At this point I put this travesty down, having given it three strikes and it's out, with just one afterthought -- if you're running a 1000 year last-chance colony mission when the human race is dying out due to transient ecological issues, aim for somewhere that is likely to be habitable a millennium hence, by doing a simple cometary orbit to the Kuiper Belt, and expect homeostasis to have reasserted itself by then.
## Saturday, September 01, 2012
### Once More with feeling
And it's time for the Cambridge Film Festival again, this blog's original raison d'être. So, I checked out the features program on-line.
It may be the way that the puff-pieces are written, but once again I'm underwhelmed, and am left thinking in some cases "Why would anyone make that film?". Can't see myself making the effort to go to any of them.
Eastwood's Unforgiven, which I saw on late night TV while on holiday back in May will probably be the only film I see this year at this rate.
### 4697.3
So 619 miles on my bike since the end of June, or 746 in total -- meaning nearly 400 miles in August. So, I should push the 1000 miles comfortably this month, all being well, and can see how the last quarter of the year shapes up.
## Tuesday, August 28, 2012
### Fortunate Fall
This morning, presumably as a result of Saturday's drenching, my bike was squeaking a lot on the ride to work. By the time I got there, I'd just about pinned it down to "something in the front wheel".
I didn't get around to doing anything to it at lunchtime, so it was still squeaking on the way home. But, at a T-junction on the cycle path, I wanted to turn right, while another cyclist coming from the blind corner to the left also wanted to turn right -- result a low-velocity collision as our front wheels met.
We ascertained that no damage had been done, and cycled off. A hundred yards or so later, I realized that the front wheel was now running quietly again! So, no need to try haphazard lubrication and hope.
## Sunday, August 26, 2012
### Three hot days and a thunderstorm
Or so goes the traditional English summer.
After the hot weekend, and a couple of dull days, the weather brightened up so I could take Karen out, first to the exhibition of Han grave goods at the Fitz (weds), then to the Crown and Castle for lunch (Thu) where we sat out on the terrace until the haze came up and the wind strengthened (as we were moving on to coffee), and again on Friday, when Karen's mother came to visit and we went out for lunch, and again sat outside at the Plough at Coton.
But then Saturday...
It sprinkled a bit around 10-ish when I wandered out to do a bit of shopping, and a bit heavier burst after I got back; but it looked fair, and the radar tracking of the weather suggested it would miss us, so I headed off to the Plough at Fen Ditton for the CAMDUG meet-up. And for a while thereafter, we sat and chatted out on the riverbank in the sun -- until suddenly all went dark.
We got inside ahead of the rain, which then tipped down in stair rods with hail for an extended period, before brightening up.
So then time to cycle home again; and by the time I got past the Elizabeth Way bridge, it had started sprinkling, and carried on that way until I had gone through Cambridge -- at which point the heavens opened again, so it was just head down and pedal, being glad of just having silk shirt and lycra shorts, which were reasonably comfortable when saturated, even though the rain was now cold (unlike a couple of weeks ago when I got similarly caught in torrential rain on the way home from work). Fortunately I got home ahead of the encore with hail.
This morning, looking at how much the pond had been refilled, it seemed like we'd had well over an inch of rain in total.
### Third time lucky
After two previous attempts were at least partially a wash-out, this year's late August cycling holiday in Norfolk went without rain.
Arriving at the King's Head soon after midday on the second day (temperatures pushing 30) of the one really hot weekend we've had this year, I headed up to the Gin Trap at Ringstead for lunch, then out to Burnham Market along the back lane that's marked as a cycle route.
View 19 August 2012 in a larger map
Following a shandy at the Nelson 1805 (formerly the Jockey), I headed back, deciding on a different route, through the narrow lanes to Syderstone. I didn't stop at the sign saying "Unsuitable for motor vehicles", but carried on into where it became unmade. The first bit was loose stone, so I needed to push, and later there were stretches of dry loose sand, which were for going barefoot and pushing -- though in the late afternoon heat, and without air cooling, I was dripping with sweat on the pushing sections.
I didn't quite make it to the main road unscathed -- just a few yards short, there was a pool of silty mud across the whole track, which clung to the tyres as I pushed, and oozed between the toes.
Back at base, despite the two refreshment stops, I sank a couple of bottles of mineral water; and then the same again plus a pint of Spitfire over dinner.
View 20 August 2012 in a larger map
The next day was duller and cooler; and the route I took through North Creake aimed to do in reverse the path through Holkham Hall I did last year, only this time there was no garden produce for the picnic lunch on the coastal path towards Wells. Indeed everything was being late -- not even blackberries were ripe for picking; mostly still having flower buds opening.
Blackberries way behind schedule (blossom and buds way over-exposed)
Holkham Hall
With the hour advancing, by the time I was heading south from Wells, it was too late for looking for pubs, except if they'd be open all day, so I just took an easy way back.
View 21 August 2012 in a larger map
The last day's ride was aimed again for a Gin Trap lunch, going to Sandringham, following the cycle route to Sedgeford, then detouring out to Heacham, using the cycle paths to Hunstanton, thence to Ringstead.
The last bit went cross country, taking an early turn from the cycle route towards Docking (passing the one blackberry bush I saw all weekend with ripening berries), then the road to Fring where I hardly had to pedal once (and then only because of a gusty south-westerly), before taking a final loop around the (for once, dry) green lane to the west of Great Bircham; totalling about 95 miles in a hotter and thus less energetic holiday than last month.
And then home, with a token few sprinkles of rain on the windscreen on the way.
## Friday, August 17, 2012
### The prodigal returns
Yoko, our female Tonkinese, managed a disappearing act on Sunday; and was not sitting waiting crossly to be let in either that evening or the next morning. The lack of corpses by the roadside gave hope, but there was no response to leafleting the neighbours.
Then today, she reappeared, fast asleep on an armchair, not even announcing her return with her usual wailing.
## Wednesday, August 15, 2012
### Building a stand-alone scalalib.dll for .net convenience
Following up on the previous post -- making a single dll scala runtime goes as follows:
2. Copy the contents of the bin subfolder into a folder .\scala-out
3. Copy scalalib.dll and forkjoin.dll into that same .\scala-out folder
4. Install ILMerge if not already available
5. Run this command (assuming powershell prompt)
& 'C:\Program Files\Microsoft\ILMerge\ILMerge.exe' /closed /allowDup /t:library /targetplatform:"v4,c:\windows\Microsoft.NET\Framework\v4.0.30319" /out:scalalib.dll .\scala-out\scalalib.dll .\scala-out\forkjoin.dll .\scala-out\IKVM.OpenJDK.Charsets.dll .\scala-out\IKVM.Runtime.JNI.dll .\scala-out\IKVM.OpenJDK.Text.dll .\scala-out\IKVM.OpenJDK.Beans.dll .\scala-out\IKVM.OpenJDK.XML.API.dll .\scala-out\IKVM.Reflection.dll .\scala-out\IKVM.Runtime.dll .\scala-out\IKVM.OpenJDK.Management.dll .\scala-out\IKVM.OpenJDK.Corba.dll .\scala-out\IKVM.OpenJDK.Core.dll
and wait until done (takes ~3GB memory, so 64-bit systems only)
6. Observe that some of those referenced assemblies aren't actually in the IKVM subset that the scalacompiler.exe drop bundles
7. Build helloworld.exe as before and co-locate it with the new 33Mb scalalib.dll that resulted from all the grinding
8. It just works™
Hopefully the real deal will bundle something like this -- ideally also strong-named (but hey, you can do that with Mono.Cecil to rewrite that file and the assemblies you link against it).
P.S. If you have .net 4.5 on your machine, change the target platform as indicated here.
PP.S. The runtime has a dependency on the native code ikvm-native-win32-x64.dll and ikvm-native-win32-x86.dll from the IKVM download, though I'm not sure where from the depths of IKVM you may hit those.
### Hello Scala.net
Belatedly spotting the March '12 update to the Scala.net story, a very brief "Hello, World"
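(The listing below is a plausible reconstruction rather than the exact hw.scala; the System.Windows.Forms reference on the build line suggests something MessageBox-shaped.)

```scala
// hw.scala -- a guess at the shape of the program being built here
import System.Windows.Forms.MessageBox

object HelloWorld {
  def main(args: Array[String]) {
    MessageBox.Show("Hello, World -- from Scala.net")
  }
}
```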
Build with
path\to\scala-bin\scalacompiler.exe -d path\to\output -target:exe -Ystruct-dispatch:no-cache -Xassem-name HelloWorld.exe -Xassem-extdirs path\to\scala-bin -Xshow-class HelloWorld -Xassem-path "C:\Windows\Microsoft.NET\Framework\v4.0.30319\System.Windows.Forms.dll" hw.scala
where giving "." as path\to\output will put the new executable in the same folder as the scala compiler.
You now need to copy all the dlls (apart from mscorlib and the other System.* files) out of that folder into your output path folder. Then
path\to\output\HelloWorld.exe
and, suddenly:
Still slightly painful to actually perform the build, but definitely getting there. Still needs a stress-test.
## Monday, August 13, 2012
### Congenial
Spent the weekend at this year's BRS con, with Friday being an "on my bike day" so I could sample the Real Ale Bar, the other days playing chauffeur for Karen, and running other errands.
As usual, it was a good chance to meet people I'd not seen in ages -- including Paul Mason, who had managed a visit back from Japan, to talk about (or point and laugh about) how academia views the fan phenomenon.
And as usual, it made me think, maybe I should dust off something from the cupboard and try doing this thing again, even if only just once every other year. Maybe some HeroQuest or something similar. But then the moment passes.
### Trivia
So, while sitting on the drive at the anniversary of acquisition, the car odo was reading 6454 miles, or 2980 miles on the year -- thanks to last year's dry autumn, and the usual run of snow days (and no thanks to the spring weather).
Talking of which, the fruit trees seem to have counted April-May as a second winter, because they have had a second run of blossom in the last few weeks:
Baby plums
Apple blossom
The plums at least managed to avoid the worst of the late frosts, and if the belated warm weather keeps up, might end up ripening a decent crop -- but at current indications that will be after the bank holiday weekend, the usual peak (and unlike last year when they were peaking about now); but the apples have been hit -- the Bramley has but a handful of fruit that look like they'll never get big enough to use; though the Charles Ross has a smaller than usual crop of smaller than usual fruit, but will at least not be a complete write-off.
## Friday, August 03, 2012
Between holiday and just everyday cycling with a purpose (unlike the gratuitous getting miles in on rare dry evenings in June), I managed 350 miles last month, which is more like it.
Last night on the way home through Bourn, it was just starting to spit with rain, with a strong southerly wind in my face, so I was barely over 20mph going through the 30 limit (as opposed to the more usual barely legal), but just kept on pedalling to get up the far side of the valley and keep ahead of the weather.
Then a car pulled out, and promptly got stuck behind another slow cyclist grinding up the hill -- so nothing for it but to keep pouring on the coal, and overtake the both of them. I was gasping for breath on the next rise out of the village, but it was satisfying.
And I managed to get clear of the path of that batch of weather and carried on dry all the way home.
## Saturday, July 14, 2012
### Cycling holiday
Despite a weather forecast which started with a sunny-ish day (rain by evening) and then wet thereafter, I managed 127 miles over 3 days, with only 2 of those miles being wet.
The holiday was the Cyclebreaks Heritage Coast tour, amended to two nights at Westleton, on account of the Halesworth Latitude festival taking over the hotels there.
View 12-Jul-2012 in a larger map
Fearing that the weather would make this the only day for distance, I made a 10am start from base, stopping for a sandwich at the Eat Anglia café next door to the Earl Soham Brewery (beer from next door on tap -- so it might as well be the brewery tap from that point of view). Thus fortified, I took part of the route up to Halesworth, and ad-libbed my way from there, suddenly recognising as I turned south at Blyford that I'd cycled in the opposite direction on previous tours ending in Southwold for the night.
The A12 was full of slow-moving traffic when I arrived there, so it was a bit of a wait to cross, but then no problems making the way to Westleton, and a very nice meal of ham hock terrine, and beef and onion suet pudding, washed down with the Summer Dream elderflower ale from Green Jack Brewery.
It was pleasing that this time, after 45 miles, while my legs were starting to tire, I was otherwise perfectly OK that evening -- unlike three months earlier, when doing the same distance around Ipswich had me creaking and hobbling. Even with the awful weather, I have at least managed to get back in condition for the season.
View 13-Jul-2012 in a larger map
Friday was dull, but apart from a few flurries of rain late morning, dry (except for underfoot). The route was almost the planned one, though I extended a mile or so further north on the marked cycle route, going through Brampton. Lunch at the Blue Lighthouse in Southwold (skewered king prawns, and treacle tart), a little walk along the promenade, and a paddle in the very chilly sea (nobody was swimming), then back on the bike, for the journey back via Walberswick and the muddy off-road section where I came a cropper last time in dry weather. This time forewarned, there was just a lot of barefoot wading while carrying the bike, both on that stretch, and later -- after a pause for a pint at the Ship in Dunwich -- on the bridleway between Dunwich Heath and Westleton.
Dinner was chicken liver with black truffle pate, macaroni cheese, and what they called peanut butter cheesecake, but was more like a peanutty mousse.
View 14-Jul-2012 in a larger map
It was raining when I woke up, but by 9, the rain radar was showing the rain had moved to the south, with a few laggard patches remaining in its wake. So rather than going south through Wickham Market and then across, I decided to go west and not turn south until the last minute. I did run into one of those laggard patches of rain for some of the route to the A12 at Darsham, but from then on, it was dry from above.
Scarecrow at Denham
Although I knew it was likely to be a bad choice, I decided on the minor route between Heveningham and Laxfield, and was greeted with water running across the road as I came into the former, then soon after going past Ubbeston, there was a hundred yards or so where the road was running ankle-deep with river, and finally a similarly deep puddle at the junction at Banyard's Green; but from then on it was dry, and by the time I came to the White Horse on the A140 at Thornham, even trying to be sunny.
From there, heading south from Finningham, the roads suddenly started to be wet again, suggesting that rain had been through very recently, vindicating the choice of route. And after a pause for a coffee at base, the rain started again as I began the drive home.
## Monday, July 02, 2012
### Strong-naming assemblies using Mono.Cecil
As the Microsoft FxCop libraries are inherently 32-bit (including native code as they do), developing code using them on a 64-bit platform throws up places where the modes get mixed, and an assembly that is generally AnyCPU ends up needing to load an x86 library and barfs. No real problem here, use corflags assembly /32BIT+ /Force, where you need the /Force for a strong-named assembly. And then if later you need that one strong-named... There is the standard technique of ILDasm/ILAsm to rebuild the assembly with strong-naming (as documented e.g. here), but you still end up with the corflags yellow warning in the MSBuild output about breaking the original strong-naming if you need that too.
But when the code I was working on also uses Mono.Cecil to do stuff, it was easier to silently do the whole lot in one script:
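(Roughly along these lines -- an F# sketch assuming the Mono.Cecil 0.9-era API (AssemblyDefinition.ReadAssembly, ModuleAttributes, WriterParameters.StrongNameKeyPair), rather than the exact script.)

```fsharp
open System.IO
open System.Reflection
open Mono.Cecil

// Force the 32-bit flag and re-sign with the key file in one pass, instead of
// corflags /32BIT+ /Force followed by an ILDasm/ILAsm round trip.
let force32BitAndSign (inPath : string) (outPath : string) (keyFile : string) =
    let assembly = AssemblyDefinition.ReadAssembly inPath
    // the moral equivalent of corflags /32BIT+ on the main module
    assembly.MainModule.Attributes <-
        assembly.MainModule.Attributes ||| ModuleAttributes.Required32Bit
    // re-sign while writing, so there's no yellow warning about a broken strong name
    let key = StrongNameKeyPair(File.ReadAllBytes keyFile)
    assembly.Write(outPath, WriterParameters(StrongNameKeyPair = key))
```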
Yes, you can do that in PowerShell too, just with more fussing about the path to the Cecil assemblies because of the schizophrenic current directory model it has. And because the rest of the code in this particular project is F#, keeping the build scripts in the same language is just natural.
### Dept of "should know better"
Cycling home from doing a little bit of shopping on Saturday morning, I was coming up to a junction where the road makes a nearly blind left turn at the point where another road joins from the right, with a traffic island in the mouth of the joining road (so a T-junction with altered priorities). Checking the mirror, there was a good gap behind me to another cyclist and then a couple of cars.
So arm out to indicate, and move over to near the white line... When suddenly the cyclist passes me, cuts in in front, then blithely bounces out again, entering the wrong side of the island. And never at any time did his hands leave his front tiller -- a piece of kit I really don't think should be road legal. Now, I'm used to the normal run of urban cyclist who usually has an arm down by one side and indicates by a twitch of an index finger or maybe a flex of the wrist; but this chap was too goddamn precious to make even that attempt.
I, of course, advised him of his unfamiliarity with the general rules of the road, but it irks me that someone supposedly serious about it should be such a bad advertisement for cycling as a mode of transport.
### 4078.2
In the end, the last week of June turned out to be good for cycling -- apart from the one day when I ended up cycling home in rain that hadn't been on the Met Office forecast. I can forgive getting showers wrong, but more organised weather less so -- like on Saturday just gone when they said cloudy for the morning when there was a blatant band of rain visible on the radar (The Weather Outlook is always wet and gloomy, so if I went by them I'd not have cycled at all).
So, over the easy target, and, with the cycling holiday in March, 950 miles for the half year, which is good considering we had a couple of months of almost total washout, above and beyond the normal April showers.
And now July opens wet again...
## Wednesday, June 20, 2012
### Anime — Leiji Matsumoto's OZMA
This short series had all the look and feel of something restored from the 1980s and texture-mapped onto contemporary CGI for the vehicles traveling on -- or in some cases under -- the desert sands of a future devastated world (which seemed to be the setting for at least two out of three titles from back in the day).
The gimmick this one had is that we had craft that would use quantum magic to submerge, to give an excuse for doing all the standard submarine things (though I guess that should be subharene for things traveling under sand) -- running silent, being depth-charged, like every submarine movie ever.
The passage of time did mean that a lot of the old clichés get to be new again from disuse and as a pure nostalgia binge succeeds on those terms; but did we really need a Char-alike masked antagonist, and an ending that borrowed from Nausicaä the way that Greece has borrowed from the bond markets?
### Cycling progress
The continuing wet weather is playing havoc with my attempts to pile in the miles like I did last year. We've had what seem to be the only two days this week that won't be a wash-out; and one of those is a work-from-home day around taking Karen to exercise class.
Currently my odo stands at 3915.6, plus the 75 miles I did on a hire bike back in March. So I have comfortably -- by 80 miles or so -- made the easy target of 2000 miles since the end of last June (i.e. the 2nd year with the odo), most of that with help from the dry warm autumn we had last year. Doing another 85 miles to roll over the 4000 in the next 10 days is looking less and less likely each time I see a weather forecast. And getting from 788 miles so far this year to the 1000 looks right out.
Let's hope we have a fine autumn again this year.
### Win32, RAII and C++ lambdas
Back when I was last writing serious amounts of C++ for pay (best part of a decade ago), I would end up with a whole bunch of little classes like this one:
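(A representative sketch of the pattern, rather than any one of the actual classes; the handle type and CloseHandle call are just the obvious example.)

```cpp
#include <windows.h>

class AutoCloseHandle
{
public:
    explicit AutoCloseHandle(HANDLE handle) : handle_(handle) {}
    ~AutoCloseHandle()
    {
        if (handle_ != INVALID_HANDLE_VALUE)
            ::CloseHandle(handle_);
    }
    operator HANDLE() const { return handle_; }

private:
    AutoCloseHandle(const AutoCloseHandle&);            // not copyable (pre-C++11 style)
    AutoCloseHandle& operator=(const AutoCloseHandle&);

    HANDLE handle_;
};
```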
to wrap those fiddly Win32 handle types with RAII behaviour; all much the same, most only being instantiated once, the only real difference being the contained type (which could be templated away) and the destructor behaviour (which isn't so easy).
With modern C++, you could wrap a pointer to the handle in a smart pointer with appropriate custom deleter. Or, avoiding the extra indirection from having to track the HANDLE and a smart-HANDLE*, take a leaf from how shared_ptr is implemented and do:
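(Again a sketch of the shape rather than a definitive implementation; the class and member names are mine.)

```cpp
#include <functional>

// Holds the handle by value and carries the clean-up action as a std::function,
// so one template covers every HWHATEVER without a hand-written deleter class.
template <typename T>
class ScopedResource
{
public:
    ScopedResource(T value, std::function<void(T)> deleter)
        : value_(value), deleter_(deleter) {}
    ~ScopedResource() { deleter_(value_); }
    T get() const { return value_; }

private:
    ScopedResource(const ScopedResource&);              // not copyable
    ScopedResource& operator=(const ScopedResource&);

    T value_;
    std::function<void(T)> deleter_;
};
```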
which directly holds a value (expected to be something pointer-like, like the Win32 HWHATEVERs) with RAII semantics; then in the code you can apply the same wrapper to different types, injecting the deleter as a lambda:
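(The calls here are ordinary Win32, but the surrounding names are illustrative.)

```cpp
#include <windows.h>

void example()
{
    // A file handle, closed with CloseHandle...
    ScopedResource<HANDLE> file(
        ::CreateFileW(L"log.txt", GENERIC_READ, 0, nullptr,
                      OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr),
        [](HANDLE h) { if (h != INVALID_HANDLE_VALUE) ::CloseHandle(h); });

    // ...and a registry key, closed with RegCloseKey, using the same wrapper.
    HKEY rawKey = nullptr;
    ::RegOpenKeyExW(HKEY_LOCAL_MACHINE, L"SOFTWARE\\Microsoft", 0, KEY_READ, &rawKey);
    ScopedResource<HKEY> key(rawKey, [](HKEY k) { if (k) ::RegCloseKey(k); });
}
```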
## Wednesday, June 06, 2012
### Anime — The New Prince of Tennis
Something of a make-weight series, picked to give us an even number of series on the go; a sequel to a long-running series I'd not watched. But sports anime (even if this has about as much to do with tennis as Saki does mah-jong) are usually at least harmless tosh that don't involve hopelessly herbivore high-schoolers and their romance woes.
And on that level, this short series did not disappoint.
It started off simply enough -- a bunch of middle-school players joining a high-school summer camp, and showing their magical-boy tennis prowess against lads two or three years their senior. And then out of the blue, after a bout of sudden death elimination contests, it's suddenly mountain climbing, survival training and crawling through a laser defence system like something out of Mission Impossible, until our heroes win enough karma to go back and face the top high-school squad.
Verdict -- did what it said on the tin.
### That British obsession
We expected it to be wet for the Bank Holiday/Jubilee weekend, because it always is, especially after the best part of a fortnight of warm dry weather. But now that's over, you might think that we were going to be back to seasonal sort of weather.
Alas, we have a positively autumnal storm with plenty of wind and rain on the way for the next couple of days. So who knows when I'll next be able to get out on the bike.
## Monday, June 04, 2012
### The Prohibitionists are at it again
Saw this story in the Daily Wail while on holiday:
Don't drink more than quarter of a pint a DAY: Oxford study claims slashing the official alcohol limit would save 4,500 lives a year
Checking figures on the 'net, with a UK population of 62.3 million and average life expectancy of 80.5, that means 774,000 deaths/year. Subtracting 4500 gives a resultant life expectancy of a whisker under 81 years.
So, the killjoys might not make you live noticeably longer -- but you'd just not enjoy it so much (if the way that this contradicts all earlier findings actually holds up).
And that's before considering the cost/benefit analysis -- drinkers provide more revenue and on average cost less in lifetime medical costs anyway.
## Sunday, June 03, 2012
### Another sea-side holiday
Guessing in the early New Year that the weather might be getting warmer by the end of May, we booked again to take a week at Netley Waterside -- and despite some headlines mid-month saying that the cold weather would last at least another month, we did get the good weather, but without it being too hot and bright for driving into.
This time, nothing essential got left behind (though I did realise that my strategy for wet weather was not to be caught out in it), and swinging around well west of Reading and (thanks to a closure on the A340 near Aldermaston) Basingstoke too, the drive down was not too stressy, despite the nigh-constant sulking of the satnav when we declined to head for the nearest motorway -- and we got to see all the stationary traffic on the M1 while doing so.
From our window
The theme for the week was cars and boats and planes; not so immediately interesting as last time's wildlife focus, so we sat out most of the trips, and instead did expeditions at our own pace back to places we had gone last year, only in better weather, and without having to spend the best part of 2 hours in loading and unloading.
Monday we went back to Marwell Wildlife
and were able to go all around, and see most of the talks, including the penguin feeding, at what would otherwise have been queueing-up-to-go-on-the-coach time.
The one organized trip we did go on was to the Beaulieu Motor Museum, somewhere I'd not been for the best part of 50 years, and which Karen had never seen. So there was a lot of stuff that was either not yet built, or still in active service back then.
The centrepiece of the museum this year is a Bond in Motion display covering 50 years of cars and other transport gadgets:
Even the cars have stunt doubles...
Wednesday we took a rest, and just went down the road for a pub lunch, and then Thursday we went back to the Andover Hawk Conservancy, where again the flexibility of going by ourselves -- and the dry weather this time -- meant that we got to see all the displays, and not just the mass flights of kites and vultures. Even if the fishing eagle managed to splash down every time, rather than skimming its food off the surface of the little pond in the display area.
African Pygmy Owl -- as seen in ソ•ラ•ノ•ヲ•ト
And then to top and tail the week, I went for a few exploratory walks, extending beyond where I'd done much less ambitious wandering last year:
View Netley Walks in a larger map
### 3812.2
So, with a late spell of warm dry weather, I managed to do a bit over 200 miles on the bike this month, and pick up the start of a sun-tan; though that would have been closer to 300 had I not gone on a non-cycling holiday last week.
So, it's a sporting chance to push the odo over the 4000 mile mark this month, but given the wet start to the proceedings (and not looking good for fitting in a long Bank Holiday bike ride), doing a full 1000 miles by the end of the half-year is probably a non-starter.
### The Narcissist-in-Chief strikes again
Going on four years ago, I had strong vibes from the US presidential elections that the country was being swept by a "Things can only get better..." mood so reminiscent of what we had seen a decade or so before. After a couple of years, my thoughts on progress so far could be roughly expressed as
But even Cyclops never managed some of the not-fit-for-purpose stunts we've seen from the White House in recent months in terms of violating operational security. First leaking details of deep cover operatives in Yemen, and now boasting about Stuxnet.
I have no words.
## Thursday, May 31, 2012
### Observation notes
Venus having been a bright evening star all spring, I caught a last glimpse at 21:36 on Saturday, courtesy of clear skies and an unobstructed horizon, nine days before it transits, and 15 minutes before spotting Vega as the first real star of the night. Subsequently the fine weather has been tending to leave cloud in the west at evening, before socking in completely now.
## Thursday, May 17, 2012
### C# under the covers II
An initialisation problem I hit recently: a null pointer exception when initialising an object via an initialiser statement. This program is a simplified example --
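(Reconstructed along these lines; the type and member names are mine rather than the original's.)

```csharp
using System;
using System.Collections.Generic;

class Holder
{
    public Dictionary<string, string> Settings { get; set; }
}

class Program
{
    static void Main()
    {
        // a: assigns a brand-new dictionary to the property
        var a = new Holder { Settings = new Dictionary<string, string> { { "id", "one" } } };

        // b: no 'new' -- this still compiles, but throws a NullReferenceException,
        // because Settings is still null when the initializer runs
        var b = new Holder { Settings = { { "id", "one" } } };

        Console.WriteLine("{0} {1}", a.Settings.Count, b.Settings.Count);
    }
}
```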
which throws when initializing b.
Putting them together like this, it's obvious what I missed out in the second initializer: I'm not creating a new object to assign. But it's not obvious what the second case is actually doing, even though it compiles -- which in itself initially surprised me. So let's look at the IL and find out what that initialisation is actually doing...
which actually resolves to something like
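(A hand-written equivalent rather than the literal IL.)

```csharp
// a: build a new dictionary, then assign it to the property
var tempA = new Holder();
var dict = new Dictionary<string, string>();
dict.Add("id", "one");
tempA.Settings = dict;
var a = tempA;

// b: fetch the existing property value and call Add on it --
// a NullReferenceException if the constructor left Settings null
var tempB = new Holder();
tempB.Settings.Add("id", "one");
var b = tempB;
```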
So if we add a constructor and a sensible ToString to the type being initialised, thus
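(Again illustrative, revisiting the same Holder type from above; the seeded values are arbitrary.)

```csharp
class Holder
{
    public Holder()
    {
        Settings = new Dictionary<string, string> { { "seed", "from ctor" } };
    }

    public Dictionary<string, string> Settings { get; set; }

    public override string ToString()
    {
        var parts = new List<string>();
        foreach (var kv in Settings)
            parts.Add(kv.Key + "=" + kv.Value);
        return string.Join(", ", parts.ToArray());
    }
}
```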
we see that the initialisation of a replaces the constructed values; whereas b adds to them.
Not sure how useful this is, as the dictionary literal form only works in initializers like this, but it's another bit of living and learning.
## Wednesday, May 16, 2012
### Anime — Chihayafuru
Looking at last Autumn's season listing, this was the one title to really stand out, the one I hoped against hope that would get licensed -- Madhouse escaping from their Marvel phase, adapting a josei sports manga, where the sport is karuta, a curiously Japanese affair, involving memorizing the 100 poems, and capturing cards based on recognising the one bearing the closing lines of the poem being read out.
After a very brief current day set-up, the story flashes back from high-school to junior school, where Chihaya Ayase befriends Arata, the shy transfer student with the regional dialect being bullied by the rest of her class. He introduces her to the game -- she is mesmerized by the way he flicks the cards at such speed, and hooked by finding that one of the poems begins with her name. Both of them play at school and in the local club, and eventually become friends with fellow player and classmate Taichi, until Arata moves away with his family.
Taichi and Arata
Surprisingly enough, the skip forward to high-school does not end up presenting us with a love triangle with occasional card games -- there are more important matters, like recruiting enough players to get an official school club and team, and then it's off to the major tournaments, with Chihaya aiming to become the overall women's champion or Queen -- the best in Japan, and thus the best in the world. Only this is not Saki (although the first passing encounter with the current Queen is reminiscent of the entry of one of the monsters from that series); older and wiser players can take advantage of Chihaya's over-impulsive speed, she can become unsure of herself, and not adapt to the person she is playing.
Facing off against the Queen
And between all the play, there are the relationships between the bunch of misfits that have been recruited to make up the team, between the team and families and other players, and also finding out what happened to Arata -- once tipped to become the Master -- to make him give up the game in the intervening years. And so we end, a year after the start, everyone gearing up to do better in the next season. The manga continues; the anime -- well, it would be nice to dream.
The production of the series is definitely good Madhouse; and the character design subtle -- characters can be plain, or plump, or middle-aged, and the ones who are meant to be pretty do stand out. The story never becomes mawkish, nor does it sacrifice everything for romance with a flimsy excuse to get the principals together. Very definitely, anime of the year, 2011. Available in many regions on Crunchyroll.
## Saturday, May 05, 2012
### The Galloway Effect
So, local elections, and the LibDem councillor of 17 years is standing down on health grounds. The election communication comes out with an introduction to the anointed successor, and isn't it wonderful what the parliamentary party has achieved.
Then suddenly, the first active canvassing I can recall for at least those 17 years, and maybe more -- phone calls, doorstepping, the full works. Then another more locally focussed election communication with a lot of "Please don't vote for the independent candidate" -- the only other horse in the race (I think I saw one Conservative poster the whole time).
So, results now in, 516 for the independent (a local character), 514 for the Libs, the rest down in the noise under 100, and a total turnout of under 1200 (which I would be surprised to find was much over 25%).
Maybe not as decisive as Galloway's win in Bradford last month, but a bit of a kick against complacency (though I don't think that Andrew Lansley has any cause for concern as yet).
## Wednesday, May 02, 2012
### Anime — Mobile Suit Zeta Gundam
One of the major series that are set in the same continuity as the original, but to my mind rather the weaker of the two. Where the original had the main characters caught up in actual warfare, doing what they could to escape to safety, and then becoming caught up as an irregular unit in the larger conflict, this is the story of a boy named Sue Kamille; and it all starts when a soldier (of the now rather fascist Earth Federation) makes fun of his name. So, he decides to sneak onto the base and cause mayhem with one of the new Mark II Gundam models; and gets out thanks to the fortunate timing of a raid by spacer guerillas.
So we reprise some of the structure of the previous series -- battles in space, then down to Earth for a while (meeting more of the old characters) before getting back into space, bouncing around the Moon and colonies before a final battle. And like last time, there's a somewhat fae girl whom the hero falls for, and who of course dies fighting for the other side, and a set of annoying kids aboard the battleship (purely gratuitously). But this time, rather than just "Federation good, Zeon bad", there is an attempt at some more Byzantine politics, with various Zeon successor groups being introduced, though often in a way that seemed to suggest that you should have known about them all along (maybe you should have read/seen something else in the canon?).
While there are a number of stronger female characters who turn their coats, or at least seriously consider it, they also get above and beyond the usual Kill 'em All Tomino treatment. And then the final battle -- they saved up all the character deaths, including at least one who'd been on borrowed time for several episodes; and after that ... it just stopped.
No aftermath, no epilogue, nothing.
In all, a series to be watched because it gets quoted so often, but being somewhat more ambitious than its predecessor, it also needs to be judged more harshly.
#### Bonus feature — Mobile Suit Gundam Wing
One of the alternate canon Gundam series, with the same Earth vs Colonies backdrop; but this time it kicks off with five superpowered delinquents being dropped to Earth in their own overpowered Gundams to #occupy the place, fighting each other as often as the Earth military. Oh, and one of them somehow is attending a school where the oujo-sama (daughter of some diplomat) showcased in the ED sequence goes, so he can threaten to kill her (which gets her all hot for him).
Meanwhile, there is a Char-alike following his own agenda, and military bases in Africa which appear to have the same racial mix as a Democrat party campaign HQ...
Dropped after five episodes.
## Tuesday, May 01, 2012
### 3601.6
Bluebells at the start of the month
So despite a very wet April (albeit one that opened and ended with warm, sunny spring days), I managed 130 miles on the bike. Need to do twice as much each for the next two months to hit my target; and May isn't looking too good (but that's Bank Holidays for you).
More typical weather
## Sunday, April 08, 2012
### Anime -- 2011 in review (part 2)
Continued from last year
One of the potentially interesting things that happened last year was ANN's launch of the anime-on-demand streaming site, which opted not to have the advertising supported option that Crunchyroll does. Alas, the player they used still stuttered and juddered (losing a few seconds' video), and often crashed each time there would have been a commercial break -- and also had a splash page telling us that the advertising revenue supported the original creators.
And, whatever else you might say about Crunchy, at least they announce what they've got early, release on a timely basis, rather than leaving it a month after a series aired to offer the last three episodes, or a month into the quarter before putting up a single new series.
So, what did I watch from last year... A lot of duds, mostly.
#### Winter season
Hourou Musuko, another adaptation from the same mangaka as Aoi Hana, was dull, worthy, and totally blew my suspension of disbelief in how it depicted 12 year old kids (having been one once, I sorta know about that). Dropped after 5 episodes.
Gosick started as a Ruritanian fantasy -- a Japanese boy studying in a school in some 1920s central European principality meets a mysterious girl who seems to live at the top of the library tower and solves mysteries. As is the way of these things, when it was episodic, it was fine, but then it grew a plot with little foreshadowing, and was suddenly about alchemists and the murder of the late Princess and court politics, which the schoolboy protagonist had no sight of. And the last couple of episodes just went over the top, culminating in major fast-forwarding with no real sense of what was going on. Japan just doesn't into endings, again.
Level E was the best series of the quarter, nearly the best of the year. A sort of Men in Black, the anime, following the arrival of a trickster alien prince on Earth along with handlers striving desperately to avoid setting off an interstellar incident. For once, a humorous series was actually funny.
#### Spring season
Tiger & Bunny -- corporate sponsored superheroes in sorta-New York, only with Japanese corporate advertising over their costumes; and played as a buddy movie with long-time hero Wild Tiger and rookie Barnaby "Bunny" Brooks Jr. as the main team, plus a supporting cast of other heroes, all competing for points in live-televised crime busting. It reached a sort of climax about halfway through when a major menace with some apparent connection to Bunny's mysterious past was defeated, and then was still noodling around when a-o-d ran out on me when I'd only got to the 3/4 mark, and I really didn't miss it.
Battle Girls : Time Paradox is not Sengoku Basara with girls, but is alas much sillier. Dropped after 4 episodes. If only Hyouge Mono had been picked up instead.
Steins;Gate was another early drop -- annoying characters, and the prospect of it turning into an alternate reality multiple choice harem were enough.
[C] : The Money of Soul and Possibility Control -- a somewhat topical short "shout out your attacks" battle series with a financial theme. Generally harmless fluff trying to be a bit serious; made a little bit unsettling by watching when it seemed like there might be imminent waves of [C] emanating from the Southern European Financial District.
#### Summer season
Natsume Yuujincho San -- haven't finished this yet.
Yuru Yuri -- insubstantial. One episode was more than enough
Kamisama Dolls -- the preview looked like it might be Narutaru v2, but alas all that bit seemed to be over in the first 2/3 of the first episode, and it looked like settling down into sophomore hijinks what with an annoying little sister and a Haruhi-wannabe science club president. Dropped after 3 episodes.
The Mystic Archives of Dantalian looked a bit like Gosick at the outset -- mid 1920s setting in England, where ex-RFC pilot Hugh Anthony Disward comes into his inheritance and finds the ancestral pile comes with goth-loli biblioprincess in amongst the mouldering tomes. He is refreshingly competent in their episodic adventures with Forbidden Books That Should Not Be, as one might expect of a gentleman adventurer.
Alas, when the last episodes finally came through, the final two turned out to be a multi-parter that would introduce the Plot. And then it ended, unresolved, in a rather rushed fashion.
#### Autumn season
Fate/Zero -- I saw the first double length episode and was underwhelmed.
Chihayafuru -- haven't finished this yet, but anime of the year.
Un-Go -- a sort of Ghost in the Shell plus future disaster stricken Japan setting, using old mystery story plots in a modern guise. The first three episodes were OK, but not enough to make me want to go out and pay a quarter's a-o-d subscription for by itself, when I was watching about as much as I could fit in from Crunchyroll anyway.
## Saturday, April 07, 2012
### Playing with (almost) the latest C++ features -- groundwork
Having had my interest in playing with native code reawoken by the new C++11 features, the first thing I went to look at was portability. One of the advantages of managed code -- JVM or CLR -- is that the VM handles portability for you and the code can be built pretty much anywhere; with native code we have to see what the compilers have in common.
I've been using VC++2010 on Windows as having many of the "big rocks" for the new standard, while being backward compatible onto OS versions before Win7 (unlike the VC++11 compiler and its runtime); and for *nix-like systems, I have cygwin and debian squeeze... Well the distro support for these is a bit behind-hand (gcc versions 4.5 and 4.4 respectively), whereas gcc 4.7 is now out with quite a broad coverage of the new standard. While 4.5 has a good chunk of the new stuff, 4.4 doesn't -- in particular, it doesn't have the new lambda syntax. So, it's build from source time to get an upgrade to a sensible version there, which means I might as well go to the latest and greatest on both platforms...
#### Building gcc 4.7 from source
##### debian
Fortunately there are some handy instructions out there which can be used as a baseline for debian. Following them, I found that I needed to tweak how I built GMP to fit PPL's requirements, by modifying the configure step to be:
CPPFLAGS=-fexceptions ../../sources/gmp-5.0.4/configure --prefix=$PROJECT_DIR/output/ --enable-cxx

before PPL would go through happily. The --enable-cxx is required for the PPL ./configure stage to run through; the CPPFLAGS=-fexceptions is optional, but it avoids a make-time warning about possible unwanted runtime behaviours if you don't. ClooG also needed

CPPFLAGS=-I$PROJECT_DIR/output/include CFLAGS=-L$PROJECT_DIR/output/lib

on the configure line to find GMP. In the gcc build, as well as pointing at ../../sources/gcc-4.7.0/configure, it's also worth taking the advice from the MacOS build instructions and only selecting languages of interest to you, i.e. to play with new C/C++ there's no need for Java or Fortran, at a considerable saving in time. Then it's a matter of just adding soft links from whichever g*-4.7 files to the unadorned versions, and adding

export LD_LIBRARY_PATH=~/gcc-4.7/output/lib/
export PATH=~/gcc-4.7/output/bin:$PATH

to your .bashrc or equivalent
##### cygwin
Cygwin is more fun -- I've not yet managed to get that to build all the way through with the loop optimization libraries. When you get to PPL, you find you also need to go back and re-configure GMP with --disable-static --enable-shared, as explained in the friendly manual, to build the shared library version. However then when building gcc, we get a mismatch with the earlier libraries in the configure stage, where it just stops:
checking for the correct version of gmp.h... yes
checking for the correct version of mpfr.h... yes
checking for the correct version of mpc.h... yes
checking for the correct version of the gmp/mpfr/mpc libraries... no
It is possible that if you start by building PPL and CLooG with shared library GMP in a first pass, then build the rest starting with reconfiguring and building a static GMP it will work, but life is too short. The MacOS build instructions didn't use the PPL/ClooG/graphite libraries either, so we can do this to configure gcc instead:
$ ../../sources/gcc-4.7.0/configure \
> --prefix=$PROJECT_DIR/output/ \
> --with-gmp=$PROJECT_DIR/output/ \
> --with-mpfr=$PROJECT_DIR/output/ \
> --with-mpc=$PROJECT_DIR/output/ \
> --program-suffix=-4.7 \
> --without-ppl --without-cloog \
> --enable-languages=c,c++

which sits and cooks for quite some time to get you the new compiler build.
#### Linkbait: fixing cygwin "mkdir foo mkdir: cannot create directory `foo': Permission denied"
I got into a state where I had this error, which other people have seen, after having tried to trash the build output of one of the failed PPL/CLooG attempts from Windows Explorer: anywhere in my home directory and below, mkdir was rejected with "mkdir: cannot create directory `foo': Permission denied". Having spotted a tangentially related mailing list message about this sort of problem happening on network shares and being ACL related, I tried the following and it worked to clear things up:
1. Start a PowerShell console as Administrator
2. Run Get-Acl on a folder (like /tmp) which you can mkdir in from cygwin (this will be %cygwin_root%\tmp where %cygwin_root% is where you installed cygwin)
$acl = Get-Acl C:\cygwin\tmp
3. In Windows Explorer set yourself Full Control on all the affected folders -- %cygwin_root%\home and %cygwin_root%\home\%USERNAME% at least
4. In the PowerShell, Set-Acl on each folder you've just frobbed with the saved ACL object
Set-Acl C:\cygwin\home \$acl
Note that the Set-Acl call may take considerable time (tens of seconds) to execute when it gets to the really problematic node and has to roll permissions down.
#### SCons -- Death to makefiles
Since I last did native code seriously at home (c. year 2000), I had discovered the very handy MiniCppUnit tool as a nice light-weight unit testing framework, so of course I went and fetched a copy to be going on with. This time, curiosity prompted me to wonder "WTF is this SConstruct file anyway?" and now when I opened it, I immediately recognised that it was some form of Python script, and wondered what sort of Python based make system this might be.
It was simple enough to find where it came from -- http://www.scons.org/ -- and looking at the user guide, I felt that it is a much more intuitive system than makefiles (admittedly there's not a high barrier there), and far less cluttered than declarative XML-based systems like Ant or MSBuild; so I'll be using that for my *nix builds -- it works very nicely for doing things like running unit tests as part of the build.
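For example, a minimal SConstruct along these lines builds a MiniCppUnit-based test runner and re-runs it as part of the build whenever it changes -- the source file names and the tests.passed marker target here are illustrative, not from a real project:

env = Environment(CXXFLAGS=['-Wall', '-std=c++11'])
# Build the test runner from the framework source plus the test sources
runner = env.Program('test_runner', ['MiniCppUnit.cxx', 'TestsRunner.cxx', 'MyTests.cxx'])
# Re-run the tests whenever the runner is rebuilt; the marker file records a successful run
passed = env.Command('tests.passed', runner, './$SOURCE && date > $TARGET')
Default(passed)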
#### MiniCppUnit -- Building it under modern C++
Just like I had to patch it to build in C++/CLI, I needed to make some changes to MiniCppUnit to get it to build clean under VC++2010, the out-of-the-box gcc 4.5 on cygwin 1.7.x and gcc 4.7 debian squeeze with -Wall -std=gnu++0x or -Wall -std=c++11 on respectively. First MiniCppUnit.hxx:
92c92
< #if _MSC_VER < 1300
---
> #if defined(_MSC_VER) && _MSC_VER < 1300
93a94
> /* Without the "defined(_MSC_VER) &&" this code gets included when building on cygwin 1.7.x with gcc 4.5.3 at least */
204c205
< static void assertTrue(char* strExpression, bool expression,
---
> static void assertTrue(const char* strExpression, bool expression,
207c208
< static void assertTrueMissatge(char* strExpression, bool expression,
---
> static void assertTrueMissatge(const char* strExpression, bool expression,
304c305
< catch ( TestFailedException& failure) //just for skiping current test case
---
> catch ( TestFailedException& /*failure*/) //just for skiping current test case
and the corresponding signature change in the .cxx file:
108c108
< void Assert::assertTrue(char* strExpression, bool expression,
---
> void Assert::assertTrue(const char* strExpression, bool expression,
122c122
< void Assert::assertTrueMissatge(char* strExpression, bool expression,
---
> void Assert::assertTrueMissatge(const char* strExpression, bool expression,
Of course there may be other things lurking to be scared out when I ramp up my standard warning levels to beyond the misleadingly named -Wall (when you have -Wextra, formerly -W, provided to switch on a whole bunch more including spotting signed/unsigned comparisons, before getting onto the really specialized ones) and switch on -Werror to force everything to be really clean. |
# Lottery: What is the probability of getting three pairs in 6 draws?
Let's assume we have a lottery with $49$ balls. We draw six of them without putting them back and sort them afterwards. E.g. one possible result is ($1$,$4$,$7$,$8$,$30$,$49$).
Obviously we have $\frac{49!}{(49-6)!6!}$ different outcomes.
How many of them contain three pairs? (e.g. 2,3,7,8,15,16)
Our approach so far:
We tried to simplify this by transforming our result $(x_1,x_2,...,x_6), 1 <= x_1 < x_2 < ... < x_6 <= 49$
to
$(x_1,x_2-1,x_3-2,...,x_6-5), 1<=x_1<=x_2<=...<=x_6<=44$.
Example: $(3,5,6,20,36,49) \Rightarrow (3,4,4,17,32,44)$.
Now we can count a pair for each $x_i = x_j$. If we draw a simple graph now and add the probabilities, we get a result that doesn't match the provided solution of $\frac{43}{45402}$.
Are we on the right track? If so, where is our error?
If you get $1,2,3,4,5,6$, how many pairs do you consider you have? – Patrick Li Nov 5 '12 at 0:54
@PatrickLi: I would say three. As I read it, after sorting, you get three pairs iff $x_1+1=x_2, x_3+1=x_4, x_5+1=x_6$ – Ross Millikan Nov 5 '12 at 1:12
A very naive approach: the number of ways to get three pairs is ${48 \choose 3}$, because you can choose three balls out of the first $48$ and then pick the one after each. This ignores problems of overlap, so is not quite right. Dividing by ${49 \choose 6}$ possibilities gives $\approx 0.124\%$ – Ross Millikan Nov 5 '12 at 1:18
@PatrickLi: Zero. We only have a pair if the neighbors are not present. e.g. $1,2,3,6,7,12$ is a single pair. – foobarfoox Nov 5 '12 at 2:15
Then you are looking for the number of ways to select three balls out of the first 48, with the restriction that if you pick ball $i$ you can pick neither $i+1$, nor $i+2$. Add the one above each of the three and you are there. A naive approach is there are 48 choices for the first, 43 for the next, and 38 for the last, divided by 6 for the orders. This gives $\approx 0.00935\%$. It is low because at the ends we don't lose the full 5 choices. – Ross Millikan Nov 5 '12 at 2:46
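For reference, here is a block-counting sketch that reproduces the quoted solution (reading "three pairs" as: the six sorted numbers form exactly three runs of two consecutive values, with no two runs touching). Place three blocks of length $2$ in $\{1,\dots,49\}$ with at least one unused number between consecutive blocks. Writing the gaps before, between and after the blocks as $g_0,g_1,g_2,g_3$ with $g_0,g_3\ge 0$, $g_1,g_2\ge 1$ and $g_0+g_1+g_2+g_3=49-6=43$, stars and bars gives $\binom{41+3}{3}=\binom{44}{3}=13244$ such selections, so $$P=\frac{13244}{\binom{49}{6}}=\frac{13244}{13983816}=\frac{43}{45402}.$$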
For in GOD we live, and move, and have our being. - Acts 17:28
The Joy of a Teacher is the Success of his Students. - Samuel Chukwuemeka
# Fractions, Ratios, and Proportions
Calculators for Fractions
For ACT Students
The ACT is a timed exam...$60$ questions for $60$ minutes
This implies that you have to solve each question in one minute.
Some questions will typically take less than a minute to solve.
Some questions will typically take more than a minute to solve.
The goal is to maximize your time. You use the time saved on those questions you solved in less than a minute, to solve the questions that will take more than a minute.
So, you should try to solve each question correctly and timely.
So, it is not just solving a question correctly, but solving it correctly on time.
Please ensure you attempt all ACT questions.
There is no "negative" penalty for any wrong answer.
For JAMB and CMAT Students
Calculators are not allowed. So, the questions are solved in a way that does not require a calculator.
Solve all questions.
Use at least two (two or more) methods whenever applicable.
Show all work.
For all my applicable students: Calculators ARE NOT allowed for all questions.
(1.) CSEC Using a calculator, or otherwise, calculate the EXACT value of:
$\dfrac{3\dfrac{1}{2} * 1\dfrac{2}{3}}{4\dfrac{1}{5}} \\[5ex]$
$\dfrac{3\dfrac{1}{2} * 1\dfrac{2}{3}}{4\dfrac{1}{5}} \\[10ex] \underline{Numerator} \\[3ex] 3\dfrac{1}{2} = \dfrac{2 * 3 + 1}{2} = \dfrac{6 + 1}{2} = \dfrac{7}{2} \\[5ex] 1\dfrac{2}{3} = \dfrac{3 * 1 + 2}{3} = \dfrac{3 + 2}{3} = \dfrac{5}{3} \\[5ex] 3\dfrac{1}{2} * 1\dfrac{2}{3} \\[5ex] = \dfrac{7}{2} * \dfrac{5}{3} \\[5ex] = \dfrac{7 * 5}{2 * 3} \\[5ex] = \dfrac{35}{6} \\[5ex] \underline{Denominator} \\[3ex] 4\dfrac{1}{5} = \dfrac{5 * 4 + 1}{5} = \dfrac{20 + 1}{5} = \dfrac{21}{5} \\[5ex] \underline{Entire\:\:Question} \\[3ex] Quotient = \dfrac{35}{6} \div \dfrac{21}{5} \\[5ex] = \dfrac{35}{6} * \dfrac{5}{21} \\[5ex] = \dfrac{35 * 5}{6 * 21} \\[5ex] = \dfrac{5 * 5}{6 * 3} \\[5ex] = \dfrac{25}{18} \\[5ex] = 1\dfrac{7}{18}$
(2.) ACT What is the least common denominator of the fractions
$\dfrac{4}{21}, \dfrac{1}{12},\:\:and\:\:\dfrac{3}{8} \\[5ex] F.\:\: 56 \\[3ex] G.\:\: 168 \\[3ex] H.\:\: 252 \\[3ex] J.\:\: 672 \\[3ex] K.\:\: 2,016 \\[3ex]$
The colors besides red indicate the common factors that should be counted only one time.
Begin with them in the multiplication for the LCD.
Then, include the rest.
$Denominators = 21, 12, 8 \\[3ex] 8 = \color{darkblue}{2} * \color{purple}{2} * 2 \\[3ex] 12 = \color{darkblue}{2} * \color{purple}{2} * \color{black}{3} \\[3ex] 21 = \color{black}{3} * 7 \\[3ex] LCD = \color{darkblue}{2} * \color{purple}{2} * \color{black}{3} * 2 * 7 \\[3ex] LCD = 168$
(3.) CSEC Using a calculator, or otherwise, calculate
$5\dfrac{1}{2} \div 3\dfrac{2}{3} + 1\dfrac{4}{5} \\[5ex]$ giving your answer as a fraction in its lowest terms
$PEMDAS \\[3ex] 5\dfrac{1}{2} \div 3\dfrac{2}{3} + 1\dfrac{4}{5} \\[7ex] 5\dfrac{1}{2} = \dfrac{2 * 5 + 1}{2} = \dfrac{10 + 1}{2} = \dfrac{11}{2} \\[5ex] 3\dfrac{2}{3} = \dfrac{3 * 3 + 2}{3} = \dfrac{9 + 2}{3} = \dfrac{11}{3} \\[5ex] 1\dfrac{4}{5} = \dfrac{5 * 1 + 4}{5} = \dfrac{5 + 4}{5} = \dfrac{9}{5} \\[5ex] 5\dfrac{1}{2} \div 3\dfrac{2}{3} \\[5ex] = \dfrac{11}{2} \div \dfrac{11}{3} \\[5ex] = \dfrac{11}{2} * \dfrac{3}{11} \\[5ex] = \dfrac{3}{2} \\[5ex] = 1\dfrac{1}{2} \\[5ex] 1\dfrac{1}{2} + 1\dfrac{4}{5} \\[5ex] We\:\:can\:\:solve\:\:in\:\:two\:\:ways \\[3ex] Use\:\:any\:\:method\:\:you\:\:prefer \\[3ex] \underline{First\:\:Method} \\[3ex] Fractions:\:\: \dfrac{1}{2} + \dfrac{4}{5} \\[5ex] LCD = 10 \\[3ex] = \dfrac{5}{10} + \dfrac{8}{10} \\[5ex] = \dfrac{5 + 8}{10} \\[5ex] = \dfrac{13}{10} \\[5ex] = 1\dfrac{3}{10} \\[5ex] Integers:\:\: 1 + 1 = 2 \\[3ex] Answer:\:\: = 2 + 1\dfrac{3}{10} = 3\dfrac{3}{10} \\[5ex] \therefore 5\dfrac{1}{2} \div 3\dfrac{2}{3} + 1\dfrac{4}{5} = 3\dfrac{3}{10} \\[5ex] \underline{Second\:\:Method} \\[3ex] 1\dfrac{1}{2} + 1\dfrac{4}{5} \\[5ex] = \dfrac{3}{2} + \dfrac{9}{5} \\[5ex] Denominators = 2, 5 \\[3ex] LCD = 10 \\[3ex] = \dfrac{15}{10} + \dfrac{18}{10} \\[5ex] = \dfrac{15 + 18}{10} \\[5ex] = \dfrac{33}{10} \\[5ex] = 3\dfrac{3}{10} \\[5ex] \therefore 5\dfrac{1}{2} \div 3\dfrac{2}{3} + 1\dfrac{4}{5} = 3\dfrac{3}{10}$
(4.) CSEC Evaluate $\dfrac{\dfrac{1}{2} * \dfrac{3}{5}}{3\dfrac{1}{2}}$, giving your answer as fraction in its lowest terms.
$\dfrac{\dfrac{1}{2} * \dfrac{3}{5}}{3\dfrac{1}{2}} \\[10ex] \underline{Numerator} \\[3ex] \dfrac{1}{2} * \dfrac{3}{5} \\[5ex] = \dfrac{1 * 3}{2 * 5} \\[5ex] = \dfrac{3}{10} \\[5ex] \underline{Denominator} \\[3ex] 3\dfrac{1}{2} = \dfrac{2 * 3 + 1}{2} = \dfrac{6 + 1}{2} = \dfrac{7}{2} \\[5ex] \underline{Entire\:\:Question} \\[3ex] Quotient = \dfrac{3}{10} \div \dfrac{7}{2} \\[5ex] = \dfrac{3}{10} * \dfrac{2}{7} \\[5ex] = \dfrac{3}{5} * \dfrac{1}{7} \\[5ex] = \dfrac{3 * 1}{5 * 7} \\[5ex] = \dfrac{3}{35}$
(5.) ACT
$\dfrac{1}{4} * \dfrac{2}{5} * \dfrac{3}{6} * \dfrac{4}{7} * \dfrac{5}{8} * \dfrac{6}{9} * \dfrac{7}{n} = ? \\[5ex] F.\:\: 1 \\[3ex] G.\:\: \dfrac{1}{n} \\[5ex] H.\:\: \dfrac{1}{12n} \\[5ex] J.\:\: \dfrac{2}{9n} \\[5ex] K.\:\: \dfrac{6}{17n} \\[5ex]$
$\dfrac{1}{4} * \dfrac{2}{5} * \dfrac{3}{6} * \dfrac{4}{7} * \dfrac{5}{8} * \dfrac{6}{9} * \dfrac{7}{n} \\[5ex] = \dfrac{1}{2} * \dfrac{1}{1} * \dfrac{1}{2} * \dfrac{1}{1} * \dfrac{1}{1} * \dfrac{1}{3} * \dfrac{1}{n} \\[5ex] = \dfrac{1 * 1 * 1 * 1 * 1 * 1 * 1}{2 * 1 * 2 * 1 * 1 * 3 * n} \\[5ex] = \dfrac{1}{12n}$
(6.) ACT What is the least common denominator of the fractions
$\dfrac{4}{21}, \dfrac{1}{24},\:\:and\:\:\dfrac{3}{16} \\[5ex] F.\:\: 112 \\[3ex] G.\:\: 336 \\[3ex] H.\:\: 504 \\[3ex] J.\:\: 2,688 \\[3ex] K.\:\: 8,064 \\[3ex]$
The colors besides red indicate the common factors that should be counted only one time.
Begin with them in the multiplication for the LCD.
Then, include the rest.
$Denominators = 21, 24, 16 \\[3ex] 21 = \color{black}{3} * 7 \\[3ex] 24 = \color{darkblue}{2} * \color{purple}{2} * \color{grey}{2} * \color{black}{3} \\[3ex] 16 = \color{darkblue}{2} * \color{purple}{2} * \color{grey}{2} * 2 \\[3ex] LCD = \color{black}{3} * \color{darkblue}{2} * \color{purple}{2} * \color{grey}{2} * 7 * 2 \\[3ex] LCD = 336$
(7.) CSEC Using a calculator, or otherwise, determine the exact value of
$\dfrac{3\dfrac{1}{3} - 2\dfrac{3}{5}}{2\dfrac{1}{5}} \\[7ex]$
$\dfrac{3\dfrac{1}{3} - 2\dfrac{3}{5}}{2\dfrac{1}{5}} \\[10ex] \underline{Numerator} \\[3ex] 3\dfrac{1}{3} - 2\dfrac{3}{5} \\[5ex] \underline{First\:\:Method} \\[3ex] Fractions:\:\: \dfrac{1}{3} - \dfrac{3}{5} \\[3ex] LCD = 15 \\[3ex] = \dfrac{5}{15} - \dfrac{9}{15} \\[5ex] = \dfrac{5 - 9}{15} \\[5ex] = -\dfrac{4}{15}...Not\:\:what\:\:we\:\:want \\[5ex] Borrow\:\:1\:\:from\:\:the\:\:Integer, 3 \\[3ex] Remaining:\:\: 3 - 1 = 2 \\[3ex] Add\:\:that\:\:1\:\:to\:\:\dfrac{1}{3} \\[5ex] 1 + \dfrac{1}{3} \\[5ex] = \dfrac{3}{3} + \dfrac{1}{3} \\[5ex] = \dfrac{3 + 1}{3} \\[5ex] = \dfrac{4}{3} \\[5ex] Fractions\:\:again:\:\: \dfrac{4}{3} - \dfrac{3}{5} \\[5ex] LCD = 15 \\[3ex] = \dfrac{20}{15} - \dfrac{9}{15} \\[5ex] = \dfrac{20 - 9}{15} \\[5ex] = \dfrac{11}{15} \\[5ex] Integers:\:\: 2 - 2 = 0 \\[3ex] Difference = 0 + \dfrac{11}{15} = \dfrac{11}{15} \\[5ex] \underline{Second\:\:Method} \\[3ex] 3\dfrac{1}{3} = \dfrac{3 * 3 + 1}{3} = \dfrac{9 + 1}{3} = \dfrac{10}{3} \\[5ex] 2\dfrac{3}{5} = \dfrac{5 * 2 + 3}{5} = \dfrac{10 + 3}{5} = \dfrac{13}{5} \\[5ex] \dfrac{10}{3} - \dfrac{13}{5} \\[5ex] LCD = 15 \\[3ex] = \dfrac{50}{15} - \dfrac{39}{15} \\[5ex] = \dfrac{50 - 39}{15} \\[5ex] = \dfrac{11}{15} \\[5ex] \underline{Denominator} \\[3ex] 2\dfrac{1}{5} = \dfrac{5 * 2 + 1}{5} = \dfrac{10 + 1}{5} = \dfrac{11}{5} \\[5ex] \underline{Entire\:\:Question} \\[3ex] Quotient = \dfrac{11}{15} \div \dfrac{11}{5} \\[5ex] = \dfrac{11}{15} * \dfrac{5}{11} \\[5ex] = \dfrac{1}{3}$
(8.) CSEC Using a calculator, or otherwise, calculate the exact value of
$\dfrac{2\dfrac{1}{4} * \dfrac{4}{5}}{\dfrac{3}{5} - \dfrac{1}{2}} \\[5ex]$
$\dfrac{2\dfrac{1}{4} * \dfrac{4}{5}}{\dfrac{3}{5} - \dfrac{1}{2}} \\[10ex] \underline{Numerator} \\[3ex] 2\dfrac{1}{4} = \dfrac{4 * 2 + 1}{4} = \dfrac{8 + 1}{4} = \dfrac{9}{4} \\[5ex] \dfrac{9}{4} * \dfrac{4}{5} = \dfrac{9}{5} \\[5ex] \underline{Denominator} \\[3ex] \dfrac{3}{5} - \dfrac{1}{2} \\[5ex] = \dfrac{6}{10} - \dfrac{5}{10} \\[5ex] = \dfrac{6 - 5}{10} \\[5ex] = \dfrac{1}{10} \\[5ex] \underline{Entire\:\:Question} \\[3ex] \dfrac{9}{5} \div \dfrac{1}{10} \\[5ex] = \dfrac{9}{5} * \dfrac{10}{1} \\[5ex] = \dfrac{9}{1} * \dfrac{2}{1} \\[5ex] = 9 * 2 \\[3ex] = 18$
(9.) WASSCE Simplify:
$\dfrac{1\dfrac{7}{8} * 2\dfrac{2}{5}}{6\dfrac{3}{4} \div \dfrac{3}{4}} \\[7ex] A.\:\: 9 \\[3ex] B.\:\: 4\dfrac{1}{2} \\[5ex] C.\:\: 2 \\[3ex] D.\:\: \dfrac{1}{2} \\[5ex]$
$\dfrac{1\dfrac{7}{8} * 2\dfrac{2}{5}}{6\dfrac{3}{4} \div \dfrac{3}{4}} \\[10ex] \underline{Numerator} \\[3ex] 1\dfrac{7}{8} = \dfrac{8 * 1 + 7}{8} = \dfrac{8 + 7}{8} = \dfrac{15}{8} \\[5ex] 2\dfrac{2}{5} = \dfrac{5 * 2 + 2}{5} = \dfrac{10 + 2}{5} = \dfrac{12}{5} \\[5ex] \dfrac{15}{8} * \dfrac{12}{5} \\[5ex] = \dfrac{3}{2} * \dfrac{3}{1} \\[5ex] = \dfrac{3 * 3}{2 * 1} \\[5ex] = \dfrac{9}{2} \\[5ex] \underline{Denominator} \\[3ex] 6\dfrac{3}{4} = \dfrac{4 * 6 + 3}{4} = \dfrac{24 + 3}{4} = \dfrac{27}{4} \\[5ex] \dfrac{27}{4} \div \dfrac{3}{4} \\[5ex] = \dfrac{27}{4} * \dfrac{4}{3} \\[5ex] = \dfrac{9}{1} * \dfrac{1}{1} \\[5ex] = 9 * 1 \\[3ex] = 9 \\[3ex] \underline{Entire\:\:Question} \\[3ex] \dfrac{9}{2} \div 9 \\[5ex] = \dfrac{9}{2} \div \dfrac{9}{1} \\[5ex] =\dfrac{9}{2} * \dfrac{1}{9} \\[5ex] = \dfrac{1}{2} * \dfrac{1}{1} \\[5ex] = \dfrac{1}{2}$
(10.) ACT For all real numbers $x$ such that $x \ne 0$, $\dfrac{4}{5} + \dfrac{7}{x} = ?$
$A.\:\: \dfrac{11}{5x} \\[5ex] B.\:\: \dfrac{28}{5x} \\[5ex] C.\:\: \dfrac{11}{5 + x} \\[5ex] D.\:\: \dfrac{7x + 20}{5 + x} \\[5ex] E.\:\: \dfrac{4x + 35}{5x} \\[5ex]$
$\dfrac{4}{5} + \dfrac{7}{x} \\[5ex] LCD\:\:of\:\:5\:\:and\:\:x = 5 * x = 5x \\[3ex] = \dfrac{4x}{5x} + \dfrac{35}{5x} \\[5ex] = \dfrac{4x + 35}{5x}$
(11.) CSEC Using a calculator, or otherwise, determine the EXACT value of:
$2\dfrac{2}{5} - 1\dfrac{1}{3} + 3\dfrac{1}{2} \\[5ex]$
We can solve this question in two ways.
Use any method you prefer.
$2\dfrac{2}{5} - 1\dfrac{1}{3} + 3\dfrac{1}{2} \\[7ex] PEMDAS \\[3ex] \underline{First\:\:Method} \\[3ex] Fractions:\:\: \dfrac{2}{5} - \dfrac{1}{3} + \dfrac{1}{2} \\[5ex] LCD = 30 \\[3ex] = \dfrac{12}{30} - \dfrac{10}{30} + \dfrac{15}{30} \\[5ex] = \dfrac{12 - 10 + 15}{30} \\[5ex] = \dfrac{17}{30} \\[5ex] Integers:\:\: 2 - 1 + 3 = 4 \\[3ex] Answer:\:\: = 4 + \dfrac{17}{30} = 4\dfrac{17}{30} \\[5ex] \therefore 2\dfrac{2}{5} - 1\dfrac{1}{3} + 3\dfrac{1}{2} = 4\dfrac{17}{30} \\[5ex] \underline{Second\:\:Method} \\[3ex] 2\dfrac{2}{5} = \dfrac{5 * 2 + 2}{5} = \dfrac{10 + 2}{5} = \dfrac{12}{5} \\[5ex] 1\dfrac{1}{3} = \dfrac{3 * 1 + 1}{3} = \dfrac{3 + 1}{3} = \dfrac{4}{3} \\[5ex] 3\dfrac{1}{2} = \dfrac{2 * 3 + 1}{2} = \dfrac{6 + 1}{2} = \dfrac{7}{2} \\[5ex] \dfrac{12}{5} - \dfrac{4}{3} + \dfrac{7}{2} \\[5ex] LCD = 30 \\[3ex] = \dfrac{72}{30} - \dfrac{40}{30} + \dfrac{105}{30} \\[5ex] = \dfrac{72 - 40 + 105}{30} \\[5ex] = \dfrac{137}{30} \\[5ex] = 4\dfrac{17}{30} \\[5ex] \therefore 2\dfrac{2}{5} - 1\dfrac{1}{3} + 3\dfrac{1}{2} = 4\dfrac{17}{30}$
(12.) CSEC Evaluate $\dfrac{\dfrac{1}{2} * \dfrac{3}{5}}{3\dfrac{1}{2}}$, giving your answer as fraction in its lowest terms.
$\dfrac{\dfrac{1}{2} * \dfrac{3}{5}}{3\dfrac{1}{2}} \\[10ex] \underline{Numerator} \\[3ex] \dfrac{1}{2} * \dfrac{3}{5} \\[5ex] = \dfrac{1 * 3}{2 * 5} \\[5ex] = \dfrac{3}{10} \\[5ex] \underline{Denominator} \\[3ex] 3\dfrac{1}{2} = \dfrac{2 * 3 + 1}{2} = \dfrac{6 + 1}{2} = \dfrac{7}{2} \\[5ex] \underline{Entire\:\:Question} \\[3ex] Quotient = \dfrac{3}{10} \div \dfrac{7}{2} \\[5ex] = \dfrac{3}{10} * \dfrac{2}{7} \\[5ex] = \dfrac{3}{5} * \dfrac{1}{7} \\[5ex] = \dfrac{3 * 1}{5 * 7} \\[5ex] = \dfrac{3}{35}$
(13.) CSEC Calculate the EXACT value of
$\left(1\dfrac{3}{4} - \dfrac{1}{8}\right) + \left(\dfrac{5}{6} \div \dfrac{2}{3}\right) \\[5ex]$
$PEMDAS \\[3ex] \underline{First\:\:Part} \\[3ex] \left(1\dfrac{3}{4} - \dfrac{1}{8}\right) \\[5ex] 1\dfrac{3}{4} = \dfrac{4 * 1 + 3}{4} = \dfrac{4 + 3}{4} = \dfrac{7}{4} \\[5ex] = \dfrac{7}{4} - \dfrac{1}{8} \\[5ex] LCD = 8 \\[3ex] = \dfrac{14}{8} - \dfrac{1}{8} \\[5ex] = \dfrac{14 - 1}{8} \\[5ex] = \dfrac{13}{8} \\[5ex] \underline{Second\:\:Part} \\[3ex] \dfrac{5}{6} \div \dfrac{2}{3} \\[5ex] = \dfrac{5}{6} * \dfrac{3}{2} \\[5ex] = \dfrac{5}{2} * \dfrac{1}{2} \\[5ex] = \dfrac{5 * 1}{2 * 2} \\[5ex] = \dfrac{5}{4} \\[5ex] \underline{First\:\:Part + Second\:\:Part} \\[3ex] \dfrac{13}{8} + \dfrac{5}{4} \\[5ex] LCD = 8 \\[3ex] = \dfrac{13}{8} + \dfrac{10}{8} \\[5ex] = \dfrac{13 + 10}{8} \\[5ex] = \dfrac{23}{8} \\[5ex] = 2\dfrac{7}{8}$
(14.) ACT $8\%$ of $60$ is $\dfrac{1}{5}$ of what number?
$A.\:\: 0.96 \\[3ex] B.\:\: 12 \\[3ex] C.\:\: 24 \\[3ex] D.\:\: 240 \\[3ex] E.\:\: 3,750 \\[3ex]$
$8\%\:\:of\:\:60 \\[3ex] = \dfrac{8}{100} * 60 \\[5ex] = \dfrac{4}{5} * 6 \\[5ex] = \dfrac{24}{5} \\[5ex] \rightarrow \dfrac{24}{5}\:\:is\:\:\dfrac{1}{5}\:\:of\:\:what\:\:number \\[5ex] Let\:\:the\:\:number = n \\[3ex] \dfrac{24}{5} = \dfrac{1}{5} * n \\[5ex] Denominators\:\:are\:\:the\:\:same \\[3ex] Equate\:\:the\:\:numerators \\[3ex] 24 = 1n \\[3ex] n = 24$
(15.) ACT What is the least common denominator of the fractions $\dfrac{4}{35}$, $\dfrac{1}{77}$, and $\dfrac{3}{22}$
$A.\:\: 110 \\[3ex] B.\:\: 770 \\[3ex] C.\:\: 2,695 \\[3ex] D.\:\: 8,470 \\[3ex] E.\:\: 59,290 \\[3ex]$
The colors besides red indicate the common factors that should be counted only one time.
Begin with them in the multiplication for the LCD.
Then, include the rest.
$Denominators = 35, 77, 22 \\[3ex] 35 = 5 * \color{black}{7} \\[3ex] 77 = \color{black}{7} * \color{darkblue}{11} \\[3ex] 22 = 2 * \color{darkblue}{11} \\[3ex] LCD = \color{black}{7} * \color{darkblue}{11} * 5 * 2 \\[3ex] LCD = 770$
(16.) ACT What is the least common multiple of $50$, $30$, and $70?$
$F.\:\: 50 \\[3ex] G.\:\: 105 \\[3ex] H.\:\: 150 \\[3ex] J.\:\: 1,050 \\[3ex] K.\:\: 105,000 \\[3ex]$
The colors besides red indicate the common factors that should be counted only one time.
Begin with them in the multiplication for the LCD.
Then, include the rest.
$Numbers = 50, 30, 70 \\[3ex] 50 = \color{black}{2} * \color{darkblue}{5} * 5 \\[3ex] 30 = \color{black}{2} * 3 * \color{darkblue}{5} \\[3ex] 70 = \color{black}{2} * \color{darkblue}{5} * 7 \\[3ex] LCM = \color{black}{2} * \color{darkblue}{5} * 5 * 3 * 7 \\[3ex] LCM = 1,050$
(17.) JAMB Simplify $3\dfrac{1}{2} - \left(2\dfrac{1}{3} * 1\dfrac{1}{4}\right) + \dfrac{3}{5}$
$A.\:\: 2\dfrac{11}{60} \\[5ex] B.\:\: 2\dfrac{1}{60} \\[5ex] C.\:\: 1\dfrac{11}{60} \\[5ex] D.\:\: 1\dfrac{1}{60} \\[5ex]$
$PEMDAS \\[3ex] 2\dfrac{1}{3} = \dfrac{3 * 2 + 1}{3} = \dfrac{6 + 1}{3} = \dfrac{7}{3} \\[5ex] 1\dfrac{1}{4} = \dfrac{4 * 1 + 1}{4} = \dfrac{4 + 1}{4} = \dfrac{5}{4} \\[5ex] 2\dfrac{1}{3} * 1\dfrac{1}{4} \\[5ex] = \dfrac{7}{3} * \dfrac{5}{4} \\[5ex] = \dfrac{35}{12} \\[5ex] 3\dfrac{1}{2} = \dfrac{2 * 3 + 1}{2} = \dfrac{6 + 1}{2} = \dfrac{7}{2} \\[5ex] 3\dfrac{1}{2} - \left(2\dfrac{1}{3} * 1\dfrac{1}{4}\right) + \dfrac{3}{5} \\[5ex] = \dfrac{7}{2} - \dfrac{35}{12} + \dfrac{3}{5} \\[5ex] LCD\:\:of\:\:2,12,5 = 60 \\[3ex] = \dfrac{210}{60} - \dfrac{175}{60} + \dfrac{36}{60} \\[5ex] = \dfrac{210 - 175 + 36}{60} \\[5ex] = \dfrac{71}{60} \\[5ex] = 1\dfrac{11}{60}$
(18.) ACT If the positive integers $x$ and $y$ are relatively prime (their greatest common factor is $1$) and $\dfrac{1}{2} + \dfrac{1}{3} * \dfrac{1}{4} \div \dfrac{1}{5} = \dfrac{x}{y}$, then $x + y = ?$
$A.\:\: 23 \\[3ex] B.\:\: 25 \\[3ex] C.\:\: 49 \\[3ex] D.\:\: 91 \\[3ex] E.\:\: 132 \\[3ex]$
$\dfrac{1}{2} + \dfrac{1}{3} * \dfrac{1}{4} \div \dfrac{1}{5} \\[5ex] PEMDAS \\[3ex] \dfrac{1}{2} + \dfrac{1 * 1}{3 * 4} \div \dfrac{1}{5} \\[5ex] = \dfrac{1}{2} + \dfrac{1}{12} \div \dfrac{1}{5} \\[5ex] = \dfrac{1}{2} + \dfrac{1}{12} * \dfrac{5}{1} \\[5ex] = \dfrac{1}{2} + \dfrac{1 * 5}{12 * 1} \\[5ex] = \dfrac{1}{2} + \dfrac{5}{12} \\[5ex] LCD = 12 \\[3ex] = \dfrac{6}{12} + \dfrac{5}{12} \\[5ex] = \dfrac{6 + 5}{12} \\[5ex] = \dfrac{11}{12} = \dfrac{x}{y} \\[5ex] \implies x = 11,\:\: y = 12 \\[3ex] x + y = 11 + 12 = 23$ |
# LaTeX error: can't write file
I'm using Miktex 2.9 on windows 7 x32, texniccenter as IDE. I encounter an error for a project that worked few days ago: when I compile, I get the message:
! I can't write on file `../preamble1.aux'.
The beginning of the .tex file is:
\documentclass[xcolor=dvipsnames]{beamer}
\include{../preamble1}
\begin{document}
What should I do?
Thanks.
What is the contents of preamble1.tex? – xport Dec 26 '10 at 16:41
Why did you put \include in the preamble?
Please change to \input.
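For example, keeping the file name from the question (only the command changes -- unlike \include, \input simply reads the file in place and does not try to write a separate .aux file for it):

\documentclass[xcolor=dvipsnames]{beamer}
\input{../preamble1}
\begin{document}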
@Imsasu, I haven't seen \include in the preamble. Even though you use \documentclass{book}, you can use \includeonly only and not \include in the preamble. It is what I know. :-) If you feel my answer is the final solution, kindly mark it with green check :-) – xport Dec 26 '10 at 16:47
See this answer for an explanation of why `\input` works and why `\include` does not, why this has changed with recent tex distributions, and how to work around it for old documents, if necessary.
# Reverse dots and boxes, swastika edition
Alistair and Roberta are playing a game of reverse dots and boxes.
• The players take turns adding one horizontal or vertical line in one free spot on the grid (marked with light gray lines in the below image). Alistair goes first.
• If a move completes a $$1\times1$$ box, the player gets one point and has to make another move. If two boxes are completed with a single move, the player gets two points but only has to make one additional move. The player keeps making moves until they make a move which does not complete a $$1\times1$$ box.
• The game ends when all possible lines have been drawn.
• Since this is a reverse game, the player with the most points loses.
Which of the players can win the game played in the above grid? What strategy should they use?
First player wins
Capture two squares from three of the four wings, then cut off the last wing. Player two can no longer make any moves that don't result in a capture.
This wins, 6-7.
Example:
• Yep, you are right. Jul 10 '19 at 20:11
• Found the same, but with the last move one square to the left. Nice job!
– Bass
Jul 10 '19 at 20:21
• This is wrong: the second player will necessarily finish two squares simultaneously and hence have only one more move after that, so they won't get 7 points in the end. Jul 10 '19 at 20:37
• @ArnaudMortier can you provide an example? I can't find a move that can be made that won't complete a box Jul 10 '19 at 20:56
• I quote the OP: If two boxes are completed with a single move, the player gets two points but only has to make one additional move. Jul 10 '19 at 21:07
Second player wins.
First player's first move, assuming they don't take any squares, can only be to split an arm with two on the end or cut off an arm, taking one of the sides on the middle square.
After that,
If first player split an arm, second player cuts off that arm, taking one square. If first player cut off an arm, second player ignores it. Now the center square has one edge filled.
Then,
Second player takes an entire arm. That leaves the center with two edges marked. There are no more branches, just a chain of 2 or 3, and a chain of 7. Second player reduces the chain of 7 by taking the end of an untouched arm. Now there's a chain of 6. They split that in two.
On first player's second turn,
They face chains of length 2,3,3 or 3,3,3. Either way, every move they take completes a square, so they take all remaining squares.
In the end,
The second player has taken 4 or 5 squares, and first player has taken 8 or 9.
Alternately,
First player can force the second player to end the game on their turn. They take two entire arms and take the end of one arm, leaving a single chain of 6. They then split that chain in half, leaving the second player with chains of length 3,3. Second player must then take all the rest. First player still loses with 7 squares to second player's 6. But this strategy allows them to lose by the least amount. |
# TITAN2D
Developer(s): Geophysical Mass Flow Group (GMFG)
Stable release: 2.0.0 / July 21, 2007
Operating system: Unix-like
Type: Geoflow simulator
License: NCSA Open Source License
(Infobox image caption: Comparison of field observation and simulations for flows at Colima volcano)
TITAN2D is a geoflow simulation software application, intended for geological researchers. It is distributed as free software.
## Overview
TITAN2D is a free software application developed by the Geophysical Mass Flow Group at the State University of New York (SUNY) at Buffalo. TITAN2D was developed for the purpose of simulating granular flows (primarily geological mass flows such as debris avalanches and landslides) over digital elevation models (DEMs) of natural terrain. The code is designed to help scientists and civil protection authorities assess the risk of, and mitigate, hazards due to dry debris flows and avalanches. TITAN2D combines numerical simulations of a flow with digital elevation data of natural terrain supported through a Geographical Information System (GIS) interface such as GRASS.
TITAN2D is capable of multiprocessor runs. A Message Passing Interface (MPI) Application Programming Interface (API) allows for parallel computing on multiple processors, which effectively increases computational power, decreases computing time, and allows for the use of large data sets.
Adaptive gridding allows for the concentration of computing power on regions of special interest. Mesh refinement captures the complex flow features that occur at the leading edge of a flow, as well as locations where rapid changes in topography induce large mass and momentum fluxes. Mesh unrefinement is applied where solution values are relatively constant or small to further improve computational efficiency.
TITAN2D requires an initial volume and shape estimate for the starting material, a basal friction angle, and an internal friction angle for the simulated granular flow. The direct outputs of the program are dynamic representations of a flow's depth and momentum. Secondary or derived outputs include flow velocity, and such field-observable quantities as run-up height, deposit thickness, and inundation area.
## Mathematical Model
The TITAN2D program is based upon a depth-averaged model for an incompressible Coulomb continuum, a “shallow-water” granular flow. The conservation equations for mass and momentum are solved with a Coulomb-type friction term for the interactions between the grains of the media and between the granular material and the basal surface. The resulting hyperbolic system of equations is solved using a parallel, adaptive mesh, Godunov scheme. The basic form of the depth-averaged governing equations appear as follows.
The depth-averaged conservation of mass is:
${\displaystyle {\underbrace {\partial h \over \partial t} }_{\begin{smallmatrix}{\text{Change}}\\{\text{in mass}}\\{\text{over time}}\end{smallmatrix}}+\underbrace {{\partial {\overline {hu}} \over \partial x}+{\partial {\overline {hv}} \over \partial y}} _{\begin{smallmatrix}{\text{Total spatial}}\\{\text{variation of}}\\{\text{x,y mass fluxes}}\end{smallmatrix}}=0}$
The depth-averaged x,y momentum balances are:
${\displaystyle {\underbrace {\partial {\overline {hu}} \over \partial t} }_{\begin{smallmatrix}{\text{Change in}}\\{\text{x mass flux}}\\{\text{over time}}\end{smallmatrix}}+\underbrace {{\partial \over \partial x}\left({\overline {hu^{2}}}+{1 \over 2}{k_{ap}g_{z}h^{2}}\right)+{\partial {\overline {huv}} \over \partial y}} _{\begin{smallmatrix}{\text{Total spatial variation}}\\{\text{of x,y momentum fluxes}}\\{\text{in x-direction}}\end{smallmatrix}}=\underbrace {-hk_{ap}\operatorname {sgn} \left({\partial u \over \partial y}\right){\partial hg_{z} \over \partial y}\sin \phi _{int}} _{\begin{smallmatrix}{\text{Dissipative internal}}\\{\text{friction force}}\\{\text{in x-direction}}\end{smallmatrix}}-\underbrace {{u \over {\sqrt {u^{2}+v^{2}}}}\left[g_{z}h\left(1+{u^{2} \over r_{x}g_{z}}\right)\right]\tan \phi _{bed}} _{\begin{smallmatrix}{\text{Dissipative basal}}\\{\text{friction force}}\\{\text{in x-direction}}\end{smallmatrix}}+\underbrace {g_{x}h} _{\begin{smallmatrix}{\text{Driving}}\\{\text{gravitational}}\\{\text{force in}}\\{\text{x-direction}}\end{smallmatrix}}}$
${\displaystyle {\underbrace {\partial {\overline {hv}} \over \partial t} }_{\begin{smallmatrix}{\text{Change in}}\\{\text{y mass flux}}\\{\text{over time}}\end{smallmatrix}}+\underbrace {{\partial {\overline {huv}} \over \partial x}+{\partial \over \partial y}\left({\overline {hv^{2}}}+{1 \over 2}{k_{ap}g_{z}h^{2}}\right)} _{\begin{smallmatrix}{\text{Total spatial variation}}\\{\text{of x,y momentum fluxes}}\\{\text{in y-direction}}\end{smallmatrix}}=\underbrace {-hk_{ap}\operatorname {sgn} \left({\partial v \over \partial x}\right){\partial hg_{z} \over \partial x}\sin \phi _{int}} _{\begin{smallmatrix}{\text{Dissipative internal}}\\{\text{friction force}}\\{\text{in y-direction}}\end{smallmatrix}}-\underbrace {{v \over {\sqrt {u^{2}+v^{2}}}}\left[g_{z}h\left(1+{v^{2} \over r_{y}g_{z}}\right)\right]\tan \phi _{bed}} _{\begin{smallmatrix}{\text{Dissipative basal}}\\{\text{friction force}}\\{\text{in y-direction}}\end{smallmatrix}}+\underbrace {g_{y}h} _{\begin{smallmatrix}{\text{Driving}}\\{\text{gravitational}}\\{\text{force in}}\\{\text{y-direction}}\end{smallmatrix}}}$ |
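As a rough illustration of how a depth-averaged balance of this kind is advanced in time by a finite-volume method, a first-order upwind (Godunov-type) update of the one-dimensional mass equation might look like the sketch below; the grid, velocity field and boundary treatment are purely illustrative and are not TITAN2D's actual implementation.

import numpy as np

# 1-D sketch of dh/dt + d(hu)/dx = 0, advanced with a first-order upwind
# (Godunov-type) finite-volume step; assumes u > 0 everywhere for simplicity.
def mass_step(h, u, dx, dt):
    hu = h * u
    flux = np.zeros(h.size + 1)   # numerical fluxes at the cell interfaces
    flux[1:-1] = hu[:-1]          # upwind: take the flux from the left-hand cell
    flux[-1] = hu[-1]             # crude outflow boundary on the right
    return h - dt / dx * (flux[1:] - flux[:-1])

x = np.linspace(0.0, 10.0, 101)
h = np.where(np.abs(x - 2.0) < 1.0, 1.0, 0.0)   # initial pile of material
u = np.full_like(h, 1.0)                        # prescribed, constant velocity
for _ in range(50):                             # CFL number u*dt/dx = 0.5
    h = mass_step(h, u, dx=0.1, dt=0.05)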
Open access peer-reviewed chapter
# Organic Rankine Cycle for Recovery of Liquefied Natural Gas (LNG) Cold Energy
By Junjiang Bao
Submitted: February 1st 2018. Reviewed: April 25th 2018. Published: November 5th 2018.
DOI: 10.5772/intechopen.77990
## Abstract
Natural gas (NG) is an environment-friendly energy source. NG is in the gas state at ambient conditions and is liquefied to LNG at a temperature of about −162°C for transportation and storage. Producing one ton of LNG consumes 292–958 kWh of electric energy. Before being used, LNG must be regasified to NG again at the receiving site, and this process releases a great deal of energy, which is called cold energy. It is very important to recover LNG cold energy, which is clean and of high quality. Power generation is a conventional and effective way to utilize LNG cold energy. Because of the low efficiency of traditional power generation systems using liquefied natural gas (LNG) cold energy, this chapter proposes the concept of a multi-stage condensation Rankine cycle system, which improves the heat transfer characteristic between the working fluid and the LNG. Furthermore, the performance of the power generation systems is enhanced in two respects: improvement of the system configuration and optimization of the working fluids.
### Keywords
• organic Rankine cycle
• LNG cold energy
• two-stage condensation Rankine cycle
• zeotropic mixture
• system configuration
## 1. Introduction
Energy shortage and environmental pollution are two major themes in today’s world [1]. More and more attention is paid to natural gas (NG) because it is clean and has a high calorific value [2, 3], and it is widely consumed all over the world [4]. NG is in the gas state at ambient conditions and is liquefied to LNG at a temperature of about −162°C for transportation and storage [5]. Producing one ton of LNG consumes 292–958 kWh of electric energy [6]. Before being used, LNG must be regasified to NG again at the receiving site, and this process releases a great deal of energy, which is called cold energy [7]. It is very important to recover LNG cold energy, which is clean and of high quality. Usually LNG is heated by sea water or air, so that the LNG cold energy is wasted and the sea near the regasification site is also affected [8]. Therefore, the recovery of LNG cold energy serves a dual purpose [8].
One of effective ways to utilize LNG cold energy is power generation [9]. The traditional cycles include direct expansion cycle (DE), organic Rankine cycle (ORC) and combined cycle (CC) [10]. Although the simplicity for direct expansion cycle, it has limited applications with low efficiency and high operation pressure. Organic Rankine cycle and combined cycle are more popular and relatively mature. Osaka Gas Company in Japan built ORC and CC system using propane in 1979 and 1982, and the power output reached 1450 and 6000 kW, respectively [11]. Due to the importance of system parameters on the performance of power generation system, many researches are carried out. With seawater as heat source and LNG as heat sink, Kim et al. [12] found that there is an optimum condenser outlet temperature for ORC system. Heat source inlet temperature, evaporation pressure and condensation temperature are studied by Wang et al. [13] to achieve high exergy efficiency of ORC recovering LNG cold energy. With the heat integration of LNG at vaporization pressure of 70 bar, Koku et al. [14] obtained a thermal efficiency of 6% for the combined cycle with propane as working fluid.
Improvement of the system structure and proper working fluid selection are two effective ways to enhance the system performance. For the system structure, combinations of simple Rankine cycles in series or parallel were considered by Zhang et al. [7] and García et al. [15], who found that they were indeed more efficient. Cascaded Rankine cycles are also a common improvement and were proved to be superior to simple Rankine cycles by Li et al. [16], Choi et al. [17], Cao et al. [18], and Wang et al. [19]. By combining the Rankine cycle and a refrigeration cycle, the study of Zhang et al. [20] showed that both electricity and refrigeration can be produced simultaneously. Mosaffa et al. [21] compared four different cycles and pointed out that a different system structure is best when the objective function changes.
In addition to the cycle structure, the selection of working fluids is also critical for the performance and economy of the system. By using eight kinds of working fluids, Zhang et al. [7] found that n-pentane gives the best system performance. A comparative study by Sung et al. [22] showed that R123 was the optimal working fluid for a dual-loop cycle with LNG cold energy as the heat sink. Considering ethane, ethene, carbon dioxide, R134a, R143a and propene, Ferreira et al. [23] concluded that ethene and ethane had higher system efficiency. Zeotropic mixtures are also considered in power generation systems for the recovery of LNG cold energy. An ammonia-water mixture was used by Wang et al. [24], who found there was an optimal mass fraction at which the work output was largest. With an R601-R23-R14 ternary mixture as the working fluid, Lee et al. [25] found that the exergy loss of an ORC using the mixture is lower than that of pure fluids. Kim et al. [26] selected an R14-propane mixture as the working fluid for the first stage of a cascaded system and an ethane-n-pentane mixture for the other two stages. Modi and Haglind [27] considered zeotropic mixtures to be the development direction for working fluids owing to their higher thermodynamic performance.
In this chapter, the performance of power generation systems driven by LNG cold energy is enhanced in two respects: improvement of the system configuration and optimization of the working fluids. Firstly, the two-stage condensation Rankine cycle is introduced. Based on this, the effect of the stage number of the condensation process is discussed. Then, the influence of the arrangements of the compression and expansion processes is studied. Regarding the optimization of working fluids, pure working fluids are first compared, and then zeotropic mixtures are optimized. Finally, a simultaneous approach to optimize the components and compositions of zeotropic mixtures is put forward.
## 2. Improvement of system configuration
### 2.1. Two-stage condensation Rankine cycle (TCRC)
As shown in Figure 1, the TCRC system consists of an evaporator, two turbines, two condensers, a mixer, a splitter and two feed pumps. After being heated in the evaporator by sea water, the working fluid is evaporated to vapor and is divided into two streams in the splitter. The two streams flow into different turbines and are expanded to two different condensation pressures. These two streams transfer heat to the LNG in two different condensers and are cooled to liquid. The two streams are then pressurized by two different pumps and mixed in the mixer. The converged stream enters the evaporator again and a new cycle recommences. Besides absorbing the condensation heat from the working fluid in the two condensers, the LNG is further heated to the scheduled temperature in the reheater with sea water. The T-s diagram of the TCRC system is plotted in Figure 2, and the labeled state points in Figure 2 are the same as those in Figure 1.
In order to determine whether the newly proposed cycle has a better performance, the novel system is compared with the conventional methods under the same conditions.
The net power output, thermal efficiency and exergy efficiency of the TCRC system are compared with those of the traditional cycles (DEC, ORC and CC), as shown in Figure 3. It should be pointed out that all four systems use propane as the working fluid. From Figure 3, it can be found that the performance of the proposed system is remarkably superior to that of the traditional power generation cycles. The combined cycle has the highest net power output, thermal efficiency and exergy efficiency among the traditional systems. However, compared with the CC system, the TCRC system shows a 45.27%, 42.91% and 52.31% increase, respectively, in terms of net power output, thermal efficiency and exergy efficiency.
In order to explain the reason why TCRC system could have a better performance than the traditional cycle, the heat transfer curves between working fluid and LNG of ORC and TCRC systems are plotted in Figure 4. It can be seen from Figure 4 that heat transfer irreversibility of ORC system is larger than that of TCRC system. The main reason is that compared with ORC system, the condensation process of TCRC system is two-stage, which could lower the heat transfer irreversibility of the condenser.
### 2.2. Effects of stage number of condensation process
In the previous section, it has been proved that two-stage condensation process has the potential to improve the performance of power generation systems by LNG cold energy. If the number of condensation stage is increased, the performance of power generation systems should be better at the cost of greater initial investment with more equipment. How many stages of condensation process should be chosen?
Figure 5 shows the schematics of six different cycles, from single-stage to three-stage condensation Rankine cycles with or without direct expansion. As a comparison object, the direct expansion cycle (DC) is also considered.
Net power output of system:
$$W_{net} = \sum_j W_{tur,j} - \sum_l W_{p,l} \qquad (1)$$
The electricity production cost (EPC) can be expressed as:
$$EPC = \frac{3600\,C_{total}}{W_{net}} \qquad (2)$$
The annual total net income (ATNI) of the system can be defined as:
$$ATNI = 7300\,(EP - EPC)\,W_{net} \qquad (3)$$
where EP is electricity price.
From Figure 6 it can be seen that the net power output of the 3CC is the largest and the DE is the least at any LNG vaporization pressure. When stage number of condensation process increases, the net power output of Rankine cycles and combined cycles both increases. The performance of combined cycles is better than that of Rankine cycles at the same stage number of condensation process.
Figure 7 shows the minimum EPC of the seven different cycles at different LNG vaporization pressures. The EPC of the Rankine cycle is larger than that of the combined system at the same stage number of the condensation process. The EPC of the combined cycle is the least at LNG vaporization pressures below 30 bar. With the increase of the stage number of the condensation process, the EPC of the combined cycles and Rankine cycles increases, but its rate of increase decreases. When the LNG vaporization pressure increases, the difference in EPC between the combined cycles and Rankine cycles at the same stage number of the condensation process tends to zero. The electricity prices reported in the literature differ, for example 0.04, 0.061, 0.1, 0.123 and 0.18$/kWh [28]. DC and CC systems should be selected at LNG vaporization pressures below 30 bar if the electricity price is 0.04$/kWh. No system is profitable at the LNG vaporization pressure of 70 bar. The CC systems are suitable at all LNG vaporization pressures when the electricity price is 0.061$/kWh; at LNG vaporization pressures below 30 bar, the DE system should be considered. All seven cycles could be profitable if the electricity price is larger than 0.1$/kWh.
The power generation capacity can be weighed by the net power output, and whether a cycle is profitable can be evaluated by the EPC. But the maximum profitability of the system is determined by both the net power output and the EPC, which is reflected by the annual net income. The electricity price of 0.123 $/kWh is taken as the reference electricity price. It can be seen from Figure 8 that the annual net income of the 3CC system is the largest, while that of the DC cycle is the least. The annual net income of the Rankine cycles is lower than that of the combined cycles at the same stage number of the condensation process. When the stage number of the condensation process increases, the annual net income of both the Rankine cycle and the combined cycle systems goes up, but their rates of increase decrease.
### 2.3. Influence of the arrangements for compression process and expansion process
In the field of utilizing LNG cold energy by ORC (organic Rankine cycle), most studies focus on how to reduce the irreversible loss of the heat exchange process but pay little attention to the arrangements of the compression and expansion processes. The compression and expansion processes, as the parts of the cycle that consume and produce energy, also affect the cycle performance, because their different arrangements change the efficiencies of the components.
The structures of four different two-stage condensation Rankine cycles are shown in Figure 9. There are two types of arrangements for the pumps in the compression process. The arrangement a shown in Figure 9 is called parallel compression arrangement. The other arrangement b shown in Figure 9 is called series compression arrangement. Similarly, there are also two types of arrangements for the turbines in the expansion process. The arrangement c shown in Figure 9 is called parallel expansion arrangement. The arrangement d shown in Figure 9 is called series expansion arrangement.
This paper takes 80% as the reference efficiency when the turbine efficiency is constant. When the turbine efficiency is non-constant, this paper adopted the turbine efficiency prediction model with the turbine size parameter (SP) and the specific volume (Vr) as the input parameters, as is shown in Eq. (4).
$$\eta_{turb,is} = \sum_{n=0}^{15} F_n A_n \qquad (4)$$
where $F_n$ represents the input parameters SP and Vr, and $A_n$ are the regression coefficients, which can be found in Ref. [29].
In order to study the arrangements of the pumps, Cycle 1 is compared with Cycle 3 under constant turbine efficiency and the same arrangement of the turbines, as shown in Figure 10a. Although the condensation temperatures vary within a range, Cycle 1 performs almost the same as Cycle 3, which indicates that the impact of the arrangements of the pumps on the system performance is small. The reason is that the power consumed by WF-pump 1 and WF-pump 2 is small (< 0.05 kW), which has very little effect on the net power output.
To investigate the arrangements of the turbines, Cycle 1 is compared with Cycle 2, as shown in Figure 10b. It can be seen that the net output power of Cycle 2 is always a little higher than that of Cycle 1 at different condensation temperatures, which suggests that the series expansion arrangement performs better than the parallel one.
The net power output of Cycle 1 is compared with those of Cycle 2 and Cycle 3 under non-constant turbine efficiency, as shown in Figure 11. It can be found that the impact of the arrangements of the pumps on the system performance is small, but the influence of the arrangements of the turbines is large. The series arrangement of the turbines has a greater impact on the system performance than the parallel arrangement. Meanwhile, as can be seen by comparing Figures 10 and 11, this impact is much more pronounced for non-constant turbine efficiency than for constant turbine efficiency.
## 3. Optimization of working fluids
### 3.1. Pure working fluids
For power generation systems using LNG cold energy, the choice of working fluid has a great influence on the performance of the system. Due to the low temperature of the LNG, it is necessary to consider several aspects when selecting working fluid. Based on the previous study, this paper selects 11 kinds of working fluids, including hydrocarbons (HCs) and hydrofluorocarbons (HFCs), and the physical properties of them are shown in Table 1.
| Working fluids | Chemical formula | Critical temperature (°C) | Critical pressure (bar) | Normal boiling point (°C) |
|---|---|---|---|---|
| R170 | C2H6 | 32.17 | 48.72 | −88.82 |
| R1270 | C3H6 | 91.06 | 45.55 | −47.62 |
| R290 | C3H8 | 96.74 | 42.51 | −42.11 |
| — | i-C4H8 | 144.94 | 40.09 | −7.00 |
| R600 | n-C4H10 | 151.98 | 37.96 | −0.49 |
| R601 | n-C5H12 | 196.55 | 33.70 | 36.06 |
| R23 | CHF3 | 26.14 | 48.32 | −82.09 |
| R134a | C2H2F4 | 101.06 | 40.59 | −26.07 |
| R125 | C2HF5 | 66.02 | 36.18 | −48.09 |
| R116 | C2F6 | 19.88 | 30.48 | −78.09 |
| R218 | C3F8 | 71.87 | 26.40 | −36.79 |
### Table 1.
Physical properties of selected pure working fluids.
The evaporation temperature, the condensation temperatures and the inlet pressure of the NG turbine of the two-stage condensation combined cycle are optimized with the net power output as the objective function. The maximum net power output and the critical temperatures of the 11 different pure working fluids are shown in Figure 12.
It can be seen from Figure 12 that the net power output of the two-stage condensation combined cycle is the largest when n-C5H12 is chosen as working fluid, and the net power output of C2F6 is the least. From the trend lines of the net power output and the critical temperature for 11 kinds of working fluids, it can be found that the variation trend of the net power output is approximately the same as that of the critical temperature of working fluids. With the increase of the critical temperature, the net power output of the system increases roughly.
### 3.2. Mixed working fluids
In this section, the 11 pure working fluids are combined into binary mixtures. With the net power output as the objective function, the evaporation temperature, condensation temperatures, the inlet pressure of the NG turbine and the molar fraction of the binary working fluids are optimized. The optimized results of the different binary mixtures at the maximum net power output of the system are shown in Figure 13.
The gray dotted line in Figure 13b represents the trend line of the net power output of the 11 pure working fluids, and the black dotted line represents the trend line of the maximum net power output in each column. From Figure 13b, it can be found that the optimal net power output for pure fluids ranges from 2158.49 to 2712.41 kW, while the optimal net power output for mixtures is distributed between 2894.47 and 3107.91 kW, an obvious increase over the pure fluids; moreover, the variation range for mixtures is much smaller than that for pure fluids.
Figure 14 shows the maximum net power output of the two-stage condensation combined cycle when the component number of the working fluid changes from one to five. When the component number of the mixed working fluid is five, the optimization results actually yield a quaternary mixture. As shown in Figure 14, with the increase of the component number of the mixed working fluid, the net power output of the two-stage condensation combined cycle increases, but the rate of increase is gradually reduced. When the component numbers of the mixed working fluid are three and four, the net power output of the system is almost the same. With the increase of the component number of the mixture, the difficulty of charging the working fluid into the system becomes significant. Therefore, considering the rate of increase of the net power output and the difficulty of charging working fluids, the optimum component number of hydrocarbon mixtures is three for the two-stage condensation combined cycle.
### 3.3. A simultaneous approach to optimize the component and composition of zeotropic mixture
The traditional method of determining the components and compositions of mixtures is firstly to predefine some fluids, and then, according to the number of components, these fluids are chosen and combined as the component of mixed working fluids one by one. At last, the compositions of the formed mixtures and the corresponding system parameters are optimized at the specified system structure respectively. It is difficult to optimize the components of a mixture, because the components of the mixed working fluids are independent of each other and discrete, and meanwhile it is difficult to describe them with mathematical variables. In order to reduce the intensity of calculation for components and compositions of zeotropic mixtures and achieve the simultaneous optimization of components and compositions for zeotropic mixtures, a selective coefficient ai is introduced, as shown in Figure 15. Because the components of mixture are discrete, only discrete variables can be used to describe them. Each component of mixture is expressed by a selective coefficient. The selective coefficient ai is a binary variable, and it has two values 0 or 1. When the value of selective coefficient ai is 1, the component expressed by this selective coefficient is selected. While the value of selective coefficient ai is 0, it means this component is not selected. The sum of the selective coefficient ai is used to control the number of component for mixture. For example, binary mixture can be optimized by Σai = 2. While the composition xi of each component is the continuous variable and its value is between 0 and 1. There are no constraint conditions for composition xi, but the total sum of compositions for all the selected components should be 1, i.e., Σaixi = 1.
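A minimal sketch of how this encoding can be decoded before each evaluation of the objective function is given below; the fluid ordering follows Table 2, while the decode helper and the way infeasible candidates are rejected are only illustrative and are not the actual implementation used in this chapter.

# Fluid ordering follows the selective coefficients a1..a11 in Table 2
FLUIDS = ["ethane", "ethylene", "propylene", "propane", "butane",
          "isobutane", "pentane", "R23", "R32", "R41", "R116"]

def decode(a, x, n_components):
    """Turn selective coefficients a (0/1) and raw compositions x into a mixture.

    Returns a list of (fluid, mole fraction) pairs, or None when the candidate
    violates the component-number constraint, so the optimizer can penalize it.
    """
    if sum(a) != n_components:          # enforce sum(a_i) = requested component number
        return None
    chosen = [(f, xi) for f, ai, xi in zip(FLUIDS, a, x) if ai == 1]
    total = sum(xi for _, xi in chosen)
    if total == 0.0:
        return None
    return [(f, xi / total) for f, xi in chosen]   # normalize so that sum(a_i*x_i) = 1

# The optimal ternary candidate from Table 2: a3 = a6 = a7 = 1
a = [0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0]
x = [0.0, 0.0, 0.492, 0.0, 0.0, 0.319, 0.189, 0.0, 0.0, 0.0, 0.0]
print(decode(a, x, n_components=3))
# [('propylene', 0.492), ('isobutane', 0.319), ('pentane', 0.189)] (up to rounding)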
Net power output is selected as the objective function, and the optimization variables include the selective coefficients of components, operation parameters of system and compositions of components. For the two-stage condensation Rankine cycle shown in Figure 1, the main operation parameters are evaporation temperature, the first-stage condensation temperature and the second-stage condensation temperature. The range of control variables and their optimal results are shown in Table 2.
| Control variables | Range | Results of pure fluid | Optimal results of binary mixture | Optimal results of ternary mixture |
|---|---|---|---|---|
| Ethane selective coefficient a1 | {0,1} | 0 | 0 | 0 |
| Ethylene selective coefficient a2 | {0,1} | 0 | 0 | 0 |
| Propylene selective coefficient a3 | {0,1} | 0 | 0 | 1 |
| Propane selective coefficient a4 | {0,1} | 0 | 0 | 0 |
| Butane selective coefficient a5 | {0,1} | 0 | 0 | 0 |
| Isobutane selective coefficient a6 | {0,1} | 0 | 1 | 1 |
| Pentane selective coefficient a7 | {0,1} | 1 | 1 | 1 |
| R23 selective coefficient a8 | {0,1} | 0 | 0 | 0 |
| R32 selective coefficient a9 | {0,1} | 0 | 0 | 0 |
| R41 selective coefficient a10 | {0,1} | 0 | 0 | 0 |
| R116 selective coefficient a11 | {0,1} | 0 | 0 | 0 |
| Ethane mole fraction x1 | [0,1] | 0.441 | 0.485 | 0.598 |
| Ethylene mole fraction x2 | [0,1] | 0.162 | 0.488 | 0.254 |
| Propylene mole fraction x3 | [0,1] | 0.399 | 0.527 | 0.492 |
| Propane mole fraction x4 | [0,1] | 0.211 | 0.410 | 0.852 |
| Butane mole fraction x5 | [0,1] | 0.752 | 0.539 | 0.570 |
| Isobutane mole fraction x6 | [0,1] | 0.825 | 0.836 | 0.319 |
| Pentane mole fraction x7 | [0,1] | 1 | 0.164 | 0.189 |
| R23 mole fraction x8 | [0,1] | 0.622 | 0.734 | 0.494 |
| R32 mole fraction x9 | [0,1] | 0.300 | 0.390 | 0.351 |
| R41 mole fraction x10 | [0,1] | 0.705 | 0.503 | 0.188 |
| R116 mole fraction x11 | [0,1] | 0.583 | 0.304 | 0.452 |
| Evaporation temperature x12 (°C) | [5,10] | 6.3 | 10.0 | 10.0 |
| First-stage condensation temperature x13 (°C) | [−140, −90] | −100.0 | −113.9 | −129.7 |
| Second-stage condensation temperature x14 (°C) | [−80, −40] | −42.9 | −56.2 | −71.2 |
### Table 2.
The range of control variables and their optimal results in case 2.
Table 2 shows that the best ternary mixture is propylene/isobutane/pentane (0.492/0.319/0.189, by mole fraction), and the optimum binary mixture is isobutane/pentane (0.836/0.164, by mole fraction). Among the pure fluids, pentane is the best.
## 4. Conclusions
This chapter has proposed the concept of a multi-stage condensation Rankine cycle (TCRC) system. The performance of the power generation systems is enhanced in two respects: improvement of the system configuration and optimization of the working fluids. Compared with the combined cycle, the net work output, thermal efficiency and exergy efficiency of the TCRC system are increased by 45.27, 42.91 and 52.31%, respectively. The two-stage condensation Rankine cycle is also more suitable from the viewpoint of economy. Regarding the arrangements of the compression and expansion processes of the TCRC, the arrangement of the pumps has little impact on the net power output, while the series arrangement of the turbines performs better than the parallel arrangement. With the increase of the critical temperature of pure fluids, the net power output of the system roughly increases. Zeotropic mixtures can improve the performance, and the optimum component number of hydrocarbon mixtures is three for the two-stage condensation combined cycle. A simultaneous approach to optimize the components and compositions of zeotropic mixtures is put forward, which can greatly reduce the required calculation time.
## Acknowledgments
This research was financially supported by the National Natural Science Foundation of China (No. 51606025).
## Conflict of interest
The author declared that there is no conflict of interest.
© 2018 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
## How to cite and reference
### Cite this chapter
Junjiang Bao (November 5th 2018). Organic Rankine Cycle for Recovery of Liquefied Natural Gas (LNG) Cold Energy, Organic Rankine Cycle Technology for Heat Recovery, Enhua Wang, IntechOpen, DOI: 10.5772/intechopen.77990. Available from:
# Why is a compound as common as water considered a "weird" chemical substance?
Dec 1, 2016
#### Explanation:
Water contains hydrogen bonds between the molecules which means that it will remain as a liquid at much higher temperatures than you would expect. This also enables the molecules to 'stick' together and gives water surface tension. The way that water forms into a solid also means that the solid of water (ice) is much less dense than the liquid, allowing it to float.
Dec 1, 2016
Compare the normal boiling point of water to that of $CH_4$, $SiH_4$, $HF$, $H_2S$, or $PH_3$. The boiling point of water is disproportionately high. This is also unusual in that water has a very small molecular mass (and thus little possibility of van der Waals interactions).
I would consult a textbook on the unusual properties of water. These properties are largely the result of the propensity of water to hydrogen-bond intermolecularly.
# Pacing protocols
class myokit.Protocol
Represents a pacing protocol as a sequence of events.
Every event represents a time span during which the stimulus is non-zero. All events specify a stimulus level, a starting time and a duration. The stimulus level is given as a dimensionless number where 0 is no stimulus and 1 is a “full” stimulus. No range is specified: 2 would indicate “twice the normal stimulus”.
Periodic events can be created by specifying their period. A period of 0 is used for non-periodic events. The number of times a periodic event occurs can be set using the multiplier value. For any non-zero value, the event will occur exactly this many times.
If an event starts while another event is already active, the new event will determine the pacing value and the old event will be deactivated.
Scheduling two events to occur at the same time will result in an error, as the resulting behavior would be undefined. When events are scheduled with the same starting time an error will be raised immediately. If the clash only happens when an event re-occurs, an error will be raised when using the protocol (for example during a simulation).
An event with start time a and duration b will be active during the interval [a, a + b). In other words, time a will be the first time it is active and time a + b will be the first time after a at which it is not.
add(e)
(Re-)schedules an event.
add_step(level, duration)
Appends an event to the end of this protocol. This method can be used to easily create voltage-step protocols.
level
The stimulus level. 1 Represents a full-sized stimulus. Only non-zero levels should be set.
duration
The length of the stimulus.
A call to p.add_step(level, duration) is equivalent to p.schedule(level, p.characteristic_time(), duration, 0, 0).
characteristic_time()
Returns the characteristic time associated with this protocol.
The characteristic time is defined as the maximum characteristic time of all events in the protocol. For a sequence of events, this is simply the protocol duration.
Examples:
>>> import myokit
>>> # A sequence of events
>>> p = myokit.Protocol()
>>> p.schedule(1, 0, 100)
>>> p.schedule(1, 100, 100)
>>> p.characteristic_time()
200.0
>>> # A finitely reoccurring event
>>> p = myokit.Protocol()
>>> p.schedule(1, 100, 0.5, 1000, 3)
>>> p.characteristic_time()
3100.0
>>> # An indefinitely reoccurring event, method returns period
>>> p = myokit.Protocol()
>>> p.schedule(1, 100, 0.5, 1000, 0)
>>> p.characteristic_time()
1000.0
clone()
Returns a deep clone of this protocol.
code()
Returns the mmt code representing this protocol and its events.
create_log_for_interval(a, b, for_drawing=False)
Creates a myokit.DataLog containing the entries time and pace representing the value of the pacing stimulus at each point.
The time points in the log will be on the interval [a, b], such that every time at which the pacing value changes is present in the log.
If for_drawing is set to True each time value between a and b will be listed twice, so that a vertical line can be drawn from the old to the new pacing value.
create_log_for_times(times)
Creates a myokit.DataLog containing the entries time and pace representing the value of the pacing stimulus at each point.
The time entries times must be a non-decreasing series of non-negative points.
events()
Returns a list of all events in this protocol.
guess_duration()
Deprecated
This method now returns the value given by characteristic_time().
head()
Returns the first event in this protocol.
If the protocol is empty, None will be returned.
in_words()
Returns a description of this protocol in words.
is_infinite()
Returns True if (and only if) this protocol contains indefinitely recurring events.
is_sequence(exception=False)
Checks if this protocol is a sequence of non-periodic steps.
The following checks are performed:
1. The protocol does not contain any periodic events
2. The protocol does not contain any overlapping events
If exception is set to True, the method will raise an Exception instead of returning False. This allows you to obtain specific information about the check that failed
is_unbroken_sequence(exception=False)
Checks if this protocol is an unbroken sequence of steps. Returns True only for an unbroken sequence.
The following checks are performed:
1. The protocol does not contain any periodic events
2. The protocol does not contain any overlapping events
3. Each new event starts where the last ended
If exception is set to True, the method will raise an Exception instead of returning False. This allows you to obtain specific information about the check that failed
levels()
Returns the levels of the events scheduled in this protocol.
For unbroken sequences of events this will produce a list of the levels visited by the protocol. For sequences with gaps or protocols with periodic events the relationship between actual levels and this method’s output is more complicated.
pop()
Removes and returns the event at the head of the queue.
range()
Returns the minimum and maximum levels set in this protocol. Will return 0, 0 for empty protocols.
schedule(level, start, duration, period=0, multiplier=0)
Schedules a new event.
level
The stimulus level. 1 Represents a full-sized stimulus. Only non-zero levels should be set.
start
The time this event first occurs.
duration
The length of the stimulus.
period (optional)
This event’s period, or 0 if it is a one-off event.
multiplier (optional)
For periodic events, this indicates the number of times this event occurs. Non-periodic events or periodic events that continue indefinitely can use 0 here.
tail()
Returns the last event in this protocol. Note that recurring events can be rescheduled, so that the event returned by this method is not necessarily the last event that would occur when running the protocol.
If the protocol is empty, None will be returned.
class myokit.ProtocolEvent(level, start, duration, period=0, multiplier=0)
Describes an event occurring as part of a protocol.
characteristic_time()
Returns a characteristic time associated with this event.
The time is calculated as follows:
Singular events
characteristic_time = start + duration
Finitely recurring events
characteristic_time = start + multiplier * period
Indefinitely recurring events, where start + duration < period
characteristic_time = period
Indefinitely recurring events, where start + duration >= period
characteristic_time = start + period
Roughly, this means that for finite events the full duration is returned, while indefinitely recurring events return the time until the first period is completed.
clone()
Returns a clone of this event.
Note that links to other events are not included in the copy!
code()
Returns a consistently formatted string representing an event.
duration()
Returns this event's duration.
in_words()
Returns a description of this event.
level()
Returns this event’s pacing level.
multiplier()
Returns the number of times this event recurs. Zero is returned for singular events and indefinitely recurring events.
next()
If this event is part of a myokit.Protocol, this returns the next scheduled event.
period()
Returns this event’s period (or zero if the event is singular).
start()
Returns the time this event starts.
stop()
Returns the time this event ends, i.e. start() + duration().
class myokit.PacingSystem(protocol)
This class uses a myokit.Protocol to update the value of a pacing variable over time.
A pacing system is created by passing in a protocol:
import myokit
p = myokit.load_protocol('example')
s = myokit.PacingSystem(p)
The given protocol will be cloned internally before use.
Initially, all pacing systems are at time 0. Time can be updated (but never moved back!) by calling advance(new_time). The current time can be obtained with time(). The value of the pacing variable is obtained from pace(). The next time the pacing variable will change can be obtained from next_time().
A pacing system can be used to calculate the values of the pacing variable at different times:
>>> import myokit
>>> s = myokit.PacingSystem(p)
>>> import numpy as np
>>> time = np.linspace(0, 1000, 10001)
>>> pace = np.array([s.advance(t) for t in time])
advance(new_time, max_time=None)
Advances the time in the pacing system to new_time. If max_time is set, the system will never go beyond max_time.
Returns the current value of the pacing variable.
next_time()
Returns the next time the pacing system will halt at.
pace()
Returns the current value of the pacing variable.
time()
Returns the current time in the pacing system.
## Protocol factory
The myokit.pacing module provides a factory methods to facilitate the creation of Protocol objects directly from python. This module is imported as part of the main myokit package.
### Periodic pacing
myokit.pacing.bpm2bcl(bpm, m=0.001)
Converts a beats-per-minute number to a basic cycle length in ms. For example a bpm of 60 equals a bcl of 1000ms.
>>> import myokit
>>> print(myokit.pacing.bpm2bcl(60))
1000.0
To use a different unit scaling, change the optional parameter m. For example, with m set to 1 the returned units are seconds:
>>> print(myokit.pacing.bpm2bcl(120, 1))
0.5
myokit.pacing.blocktrain(period, duration, offset=0, level=1.0, limit=0)
Creates a train of block pulses.
Each pulse lasts duration time units and a pulse is initiated every period time units.
An optional offset to the first pulse can be specified using offset and the level of each pulse can be set using level. To limit the number of pulses generated set limit to a non-zero value.
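As a quick illustration (a sketch, assuming Myokit is installed and blocktrain behaves exactly as documented above), a train of 0.5 ms pulses every 1000 ms, starting at t=100 and limited to 3 pulses, can be built and inspected like this:
>>> import myokit
>>> p = myokit.pacing.blocktrain(1000, 0.5, offset=100, level=1.0, limit=3)
>>> p.is_infinite()
False
>>> p.characteristic_time()
3100.0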
### Step protocols
myokit.pacing.constant(level)
Creates a very simple protocol where the pacing variable is held constant at a given level specified by the argument level.
myokit.pacing.steptrain(vsteps, vhold, tpre, tstep, tpost=0)
Creates a series of increasing or decreasing steps away from the fixed holding potential vhold, towards the voltages listed in vsteps.
1. For the first tpre time units, the pacing variable is held at the value given by vhold.
2. For the next tstep time units, the pacing variable is held at a value from vsteps
3. For the next tpost time units, the pacing variable is held at vhold again.
These three steps are repeated for each value in the vsteps.
myokit.pacing.steptrain_linear(vstart, vend, dv, vhold, tpre, tstep, tpost=0)
Creates a series of increasing or decreasing steps away from a holding value (typically a holding potential). This type of protocol is commonly used to measure activation or inactivation in ion channel models.
1. For the first tpre time units, the pacing variable is held at the value given by vhold.
2. For the next tstep time units, the pacing variable is held at a value ranging from vstart to vend, with increments dv.
3. For the next tpost time units, the pacing variable is held at the value vhold again.
These three steps are repeated for each value in the range from vstart up to (but not including) vend, with an increment specified as dv.
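For example (again a sketch, assuming the API behaves as documented above), an activation-style protocol stepping from -60 to 40 in increments of 10 from a holding level of -80 could be built and inspected like this:
>>> import myokit
>>> p = myokit.pacing.steptrain_linear(-60, 40, 10, -80, 5000, 2000, 3000)
>>> p.is_unbroken_sequence()
True
>>> d = p.create_log_for_interval(0, p.characteristic_time(), for_drawing=True)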
## PixelBot Part 2: Devices need protocols, apparently
So there I was. I'd just got home, turned on my laptop, opened the Arduino IDE and Monodevelop, and then.... nothing. I knew I wanted my PixelBot to talk to the PixelHub I'd started writing, but I was confused as to how I could make it happen.
In this kind of situation, I realised that although I knew what I wanted them to do, I hadn't figured out the how. As it happens, when you're trying to get one (or more, in this case) different devices to talk to each other, there's something rather useful that helps them all to speak the same language: a protocol.
A protocol is a specification that defines the language that different devices use to talk to each other and exchange messages. Defining one before you start writing a networked program is probably a good idea - I find it particularly helpful to write a specification for the protocol that the program(s) I'm writing will use, especially if their function(s) is/are complicated.
To this end, I've ended up spending a considerable amount of time drawing up the PixelHub Protocol - a specification document that defines how my PixelHub server is going to talk to a swarm of PixelBots. It might seem strange at first, but I decided on a (mostly) binary protocol.
Upon closer inspection though, (I hope) it makes a lot of sense. Since the Arduino is programmed in C++ and has a limited amount of memory, it doesn't have any of the standard string manipulation functions that you're used to in C♯. Since C++ is undoubtedly the harder of the 2 to write, I decided to make it easier to write the C++ rather than the C♯. Messages on the Arduino side come in as a byte[] array, so (in theory) it should be easy to pick out certain known parts of the array and cast them into various different fundamental types.
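To illustrate, here's a rough C♯-side sketch of picking fixed-offset fields out of a received message. The field layout here is purely hypothetical - it is not the real PixelHub protocol:

// Hypothetical layout: [0] = message type, [1..2] = pixel index (ushort), [3..5] = RGB colour
byte[] message = new byte[] { 1, 0, 42, 255, 128, 0 }; // stand-in for bytes received from the socket
byte messageType = message[0];
ushort pixelIndex = BitConverter.ToUInt16(message, 1);
byte red = message[3], green = message[4], blue = message[5];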
With the specification written, the next step in my PixelBot journey is to actually implement it, which I'll be posting about in the next entry in this series!
## Easier TCP Networking in C♯
I see all sorts of C♯ networking tutorials out there telling you that you have to use byte arrays and buffers and all sorts of other complicated things if you ever want to talk to another machine over the network. Frankly, it's all rather confusing.
Thankfully though, it doesn't have to stay this way. I've learnt a different way of doing TCP networking in C♯ at University (thanks Brian!), and I realised the other day I've never actually written a blog post about it (that I can remember, anyway!). If you know how to read and write files and understand some basic networking concepts (IP addresses, ports, what TCP and UDP are, etc.), you'll have no problems understanding this.
### Server
The easiest way to explain it is to demonstrate. Let's build a quick server / client program where the server says hello to the client. Here's the server code:
// Server.cs
using System;
using System.Net;
using System.Net.Sockets;
using System.IO;
using System.Threading.Tasks;
public class Server
{
    public int Port { get; private set; }

    public Server(int inPort)
    {
        Port = inPort;
    }

    public async Task Start()
    {
        TcpListener server = new TcpListener(IPAddress.Any, Port);
        server.Start();

        while (true) {
            TcpClient nextClient = await server.AcceptTcpClientAsync();
            StreamReader incoming = new StreamReader(nextClient.GetStream());
            StreamWriter outgoing = new StreamWriter(nextClient.GetStream()) { AutoFlush = true };

            string name = (await incoming.ReadLineAsync()).Trim();
            await outgoing.WriteLineAsync($"Hello, {name}!");
            Console.WriteLine("Said hello to {0}", name);
            nextClient.Close();
        }
    }
}
// Use it like this in your Main() method:
Server server = new Server(6666);
server.Start().Wait();
Technically speaking, that asynchronous code ought to be running in a separate thread - I've omitted it to make it slightly simpler :-)
Let's break this down. The important bit is in the Start() method - the rest is just sugar around it to make it run if you want to copy and paste it. First, we create & start a TcpListener:
TcpListener server = new TcpListener(IPAddress.Any, Port);
server.Start();
Once done, we enter a loop, and wait for the next client:
TcpClient nextClient = await server.AcceptTcpClientAsync();
Now that we have a client to talk to, we attach a StreamReader and a StreamWriter with a special option set on it to allow us to talk to the remote client with ease. The option set on the StreamWriter is AutoFlush, and it basically tells it to flush its internal buffer every time we write to it - that way things we write to it always hit the TcpClient underneath. Depending on your setup the TcpClient does some internal buffering & optimisations anyway, so we don't need the second layer of buffering here:
StreamReader incoming = new StreamReader(nextClient.GetStream());
StreamWriter outgoing = new StreamWriter(nextClient.GetStream()) { AutoFlush = true };
With that, the rest should be fairly simple to understand:
string name = (await incoming.ReadLineAsync()).Trim();
await outgoing.WriteLineAsync($"Hello, {name}!");
Console.WriteLine("Said hello to {0}", name);
nextClient.Close();
First, we grab the first line that the client sends us, and trim off any whitespace that's lurking around. Then, we send back a friendly hello message to the client, before logging what we've done to the console and closing the connection.
### Client
Now that you've seen the server code, the client code should be fairly self explanatory. The important lines are highlighted:
using System;
using System.Net;
using System.Net.Sockets;
using System.IO;
using System.Threading.Tasks;
public class Client
{
    public string Hostname { get; private set; }
    public int Port { get; private set; }

    public Client(string inHostname, int inPort)
    {
        Hostname = inHostname;
        Port = inPort;
    }

    public async Task<string> GetHello(string name) {
        TcpClient client = new TcpClient();
        client.Connect(Hostname, Port);

        StreamReader incoming = new StreamReader(client.GetStream());
        StreamWriter outgoing = new StreamWriter(client.GetStream()) { AutoFlush = true };

        await outgoing.WriteLineAsync(name);

        return (await incoming.ReadLineAsync()).Trim();
    }
}
// Use it like this in your Main() method:
Client client = new Client("localhost", 6666);
Console.WriteLine("The server said: {0}", client.GetHello("Bill").Result);
First, we create a new client and connect it to the server. Next, we connect the StreamReader and StreamWriter instances to the TcpClient, and then we send the name to the server. Finally, we read the response the server sent us and return it. Easy!
Here are some example outputs:
Server:
./NetworkingDemo-Server.exe
Said hello to Bill
Client:
./NetworkingDemo-Client.exe
The server said: Hello, Bill!
The above code should work on Mac, Windows, and Linux. Granted, it's not the most efficient way of doing things, but it should be fine for most general purposes. Personally, I think the trade-off between performance and readability/ease of understanding of code is totally worth it.
If you prefer, I've got an archive of the above code I wrote for this blog post - complete with binaries. You can find it here: NetworkingDemo.7z.
## Picking the right interface for multicast communications
At the recent hardware meetup, I was faced with an interesting problem: I was trying to communicate with my Wemos over multicast UDP in order to get it to automatically discover my PixelHub Server, but the multicast pings were going out over the wrong interface - I had both an ethernet cable plugged in and a WiFi hotspot running on my integrated wireless card.
The solution to this is not as simple as you might think - you have to not only pick the right interface, but also the right version of the IP protocol. You also have to have some way of picking the correct interface in the first place.
Let's say you have a big beefy PC with a wireless card and 2 ethernet ports that are (for some magical reason) all in use at the same time, and you want to communicate with another device over your wireless card and not either of your ethernet ports.
I developed this on my linux laptop, but it should work just fine on other OSes.
To start, it's probably a good idea to list all of our network interfaces:
using System.Net.NetworkInformation;
// ...
NetworkInterface[] nics = NetworkInterface.GetAllNetworkInterfaces();
foreach (NetworkInterface nic in nics)
{
Console.WriteLine("Id: {0} - Description: {1}", nic.Id, nic.Description);
}
This (on my machine at least!) outputs something like this:
eth0 - eth0
lo - lo
wlan0 - wlan0
Your machine will probably output something different. Next, since you can't normally address this list of network interfaces directly by name, we need to write a method to do it for us:
public static NetworkInterface GetNetworkIndexByName4(string targetInterfaceName)
{
NetworkInterface[] nics = NetworkInterface.GetAllNetworkInterfaces();
foreach (NetworkInterface nic in nics)
{
if (nic.Id == targetInterfaceName)
return nic;
}
throw new Exception($"Error: Can't find network interface with the name {targetInterfaceName}.");
}
Pretty simple, right? We're not out the woods yet though - next we need to tell our UdpClient to talk on a specific network interface. Speaking of which, let's set up that UdpClient so that we can use it to do stuff with multicast:
using System.Net;
// ...
UdpClient client = new UdpClient(5050);
client.JoinMulticastGroup(IPAddress.Parse("239.62.148.30"));
With that out of the way, we can now deal with telling the UdpClient which network interface it should be talking on. This is actually quite tricky, since the UdpClient doesn't take a NetworkInterface directly. Let's define another helper method:
public static int GetIPv4Index(this NetworkInterface nic)
{
IPInterfaceProperties ipProps = nic.GetIPProperties();
IPv4InterfaceProperties ip4Props = ipProps.GetIPv4Properties();
return ip4Props.Index;
}
The above extension method gets the index of the IPv4 interface of the network interface. Since at the moment we are in the middle of a (frustratingly slow) transition from IPv4 to IPv6, each network interface must have both an IPv4 interface, for talking to other IPv4 hosts, and an IPv6 interface for talking to IPv6 hosts. In this example I'm using IPv4, since the Wemos I want to talk to doesn't support IPv6 :-(
Now that we have a way to get the index of a network interface, we need to translate it into something that the UdpClient understands:
int interfaceIndex = (int)IPAddress.HostToNetworkOrder(NetTools.GetNetworkIndexByName4(NetworkInterfaceName).GetIPv4Index());
That's complicated! Thankfully, we don't need to pick it apart completely - it just works :-)
Now that we have the interface index in the right format, all we have to do is tell the UdpClient about it. Again, this is also slightly overcomplicated:
client.Client.SetSocketOption(
SocketOptionLevel.IP,
SocketOptionName.MulticastInterface,
interfaceIndex
);
Make sure that you put this call before you join the multicast group. With that done, your UdpClient should finally be talking on the right interface!
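Putting the pieces from this post together in the right order looks something like this (a condensed sketch):

int interfaceIndex = (int)IPAddress.HostToNetworkOrder(
    NetTools.GetNetworkIndexByName4("wlan0").GetIPv4Index()
);

UdpClient client = new UdpClient(5050);
client.Client.SetSocketOption(
    SocketOptionLevel.IP,
    SocketOptionName.MulticastInterface,
    interfaceIndex
);
client.JoinMulticastGroup(IPAddress.Parse("239.62.148.30"));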
Whew! That was quite the rabbit hole (I sent my regards to the rabbit :P). If you have any issues with getting it to work, I'm happy to help - just post a comment down below.
### Sources
• Openclipart images: 1, 2
## Creating a HTTP Server in C♯
I discovered the HttpListener class in .NET the other day, and decided to take a look. It turns out that it's way easier to use than doing it all by hand as I had to do in my 08241 Networking module a few months ago. I thought I'd write a post about it so that you can see how easy it is and use the same technique yourself. You might want to add a HTTP status reporting system to your program, or use it to serve a front-end GUI. You could even build a real-time app with WebSockets.
As with all my tutorials - this post is a starting point, not an ending point. Refactor it to your heart's content!
To start off, let's create a new C♯ console project and create an instance of it:
using System;
using System.Net;
class MainClass
{
public static void Main(string[] args)
{
HttpListener listener = new HttpListener();
}
}
This should look pretty familiar to you. If not, then I can recommend the C♯ yellow book by Rob Miles. This is all rather boring so far, so let's move on quickly.
listener.Prefixes.Add("http://*:3333/");
listener.Start();
C♯'s HttpListener works on prefixes. It parses the prefix to work out what it should listen on and for what. The above listens for HTTP requests from anyone on port 3333. Apparently you can listen for HTTPS requests using this class too, but I'd recommend putting it behind some kind of proxy like NginX instead and letting that handle the TLS certificates.
We also start the listener listening for requests too, because it doesn't listen for requests by default.
while(true)
{
HttpListenerContext cycle = listener.GetContext();
Console.WriteLine("Got request for {0} from {1}.", cycle.Request.RawUrl, cycle.Request.RemoteEndPoint);
StreamWriter outgoing = new StreamWriter(cycle.Response.OutputStream);
}
This next part is the beginning of the main request / response loop. The highlighted line is the important one - it retrieves the next pending request, waiting for it if necessary. The following lines log to the console about the latest request, and set up a StreamWriter to make it easier to send the response to the requester.
The final piece of the puzzle is actually sending the response and moving on to the next request.
outgoing.WriteLine("It works!");
outgoing.Close();
cycle.Response.Close();
Actually writing the response is easy - all you have to do is write it to the StreamWriter. Then all you have to do is call Close() on the stream writer and the Response object, and you're done! Here's the whole thing:
using System;
using System.Net;
using System.IO;
namespace HttpServerTest
{
class MainClass
{
public static void Main(string[] args)
{
HttpListener listener = new HttpListener();
listener.Prefixes.Add("http://*:3333/");
listener.Start();
while(true)
{
HttpListenerContext cycle = listener.GetContext();
Console.WriteLine("Got request for {0} from {1}.", cycle.Request.RawUrl, cycle.Request.RemoteEndPoint);
StreamWriter outgoing = new StreamWriter(cycle.Response.OutputStream);
cycle.Response.ContentType = "text/plain";
outgoing.WriteLine("It works!");
outgoing.Close();
cycle.Response.Close();
}
}
}
}
(Hastebin, Raw)
That's all you really need to create a simple HTTP server in C♯. You should extend this by putting everything that could go wrong in a try..catch statement to prevent the server from crashing. You could also write a simple find / replace templating system to make it easier to serve files from your new HTTP server. Refactoring the above to make it asynchronous wouldn't be a bad idea either.
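For instance, here's a minimal sketch of an asynchronous version of the request loop - it assumes the surrounding method is async, that listener is set up as above, and that you've added using System.Threading.Tasks;:

while (true)
{
    HttpListenerContext cycle = await listener.GetContextAsync();
    Console.WriteLine("Got request for {0} from {1}.", cycle.Request.RawUrl, cycle.Request.RemoteEndPoint);
    StreamWriter outgoing = new StreamWriter(cycle.Response.OutputStream);
    cycle.Response.ContentType = "text/plain";
    await outgoing.WriteLineAsync("It works!");
    outgoing.Close();
    cycle.Response.Close();
}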
Have you been building anything interesting recently? Post about it in the comments below!
## TeleConsole: A simple remote debugging solution for awkward situations
Several times in the last few months I've found myself in some kind of awkward situation where I need to debug a C♯ program, but the program in question either doesn't have a console, or is on a remote machine. In an ideal world, I'd like to have all my debugging message sent to my development machine for inspection. That way I don't have to check the console of multiple different machines just to get an idea as to what has gone wrong.
C♯ already has System.Diagnostics.Debug, which functions similarly to the Console class, except that it sends data to the Application output window. This is brilliant for things running on your local machine through Visual Studio or MonoDevelop, but not so great when you've got a program that utilises the network and has to run on separate computers. Visual Studio for one starts to freeze up if you open the exact same repository on a network drive that's already open somewhere else.
It is for these reasons that I finally decided to sit down and write TeleConsole. It's a simple remote console that you can use in any project you like. It's split into 2 parts: the client library and the server binary. The server runs on your development machine, and listens for connections from the client library, which you reference in your latest and greatest project and use to send messages to the server.
Take a look here: sbrl/TeleConsole (GitHub) (Direct link to the latest release)
The client API is fully documented with intellisense comments, so I assume it should be very easy to work out how to use it (if there's something I need to do to let you use the intellisense comments when writing your own programs, let me know!). If you need some code to look at, I've created an example whilst testing it.
Although it's certainly not done yet, I'll definitely be building upon it in the future to improve it and add additional features.
## Demystifying traceroute
(Image from labnol. View the full map at submarinecablemap.com.)
A little while ago someone I know seemed a little bit confused as to how a traceroute interacts with a firewall, so I decided to properly look into it and write this post.
Traceroute is the practice of sending particular packets across a network in order to discover all the different hops between the source computer and the destination. For example, here's a traceroute between starbeamrainbowlabs.com and bbc.co.uk:
traceroute to bbc.co.uk (212.58.244.23), 30 hops max, 60 byte packets
1 125.ip-37-187-66.eu (37.187.66.125) 0.100 ms 0.015 ms 0.011 ms
2 be10-147.rbx-g1-a9.fr.eu (37.187.231.169) 0.922 ms 0.912 ms 0.957 ms
3 be100-1187.ldn-1-a9.uk.eu (91.121.128.87) 7.536 ms 7.538 ms 7.535 ms
4 * * *
5 ae-1-3104.ear2.London2.Level3.net (4.69.143.190) 18.481 ms 18.676 ms 18.903 ms
6 unknown.Level3.net (212.187.139.230) 10.725 ms 10.434 ms 10.415 ms
7 * * *
8 ae0.er01.telhc.bbc.co.uk (132.185.254.109) 10.565 ms 10.666 ms 10.603 ms
9 132.185.255.148 (132.185.255.148) 12.123 ms 11.781 ms 11.529 ms
10 212.58.244.23 (212.58.244.23) 10.596 ms 10.587 ms 65.243 ms
As you can see, there are quite a number of hops between us and the BBC, not all of which responded to attempts to probe them. Before we can speculate as to why, it's important to understand how a traceroute is performed.
There are actually a number of different methods to perform a traceroute, but they all have a few things in common. The basic idea exploits something called time to live (TTL). This is a special value that all IP packets have (located 8 bytes into an IPv4 header, and 7 bytes into an IPv6 header - where it's called the hop limit - for those who are curious) that determines the maximum number of hops that a packet is allowed to go through before it is dropped. Every hop along a packet's route decreases this value by 1. When it reaches 0, an ICMP TTL Exceeded message is returned to the source of the packet. This message can be used to discover the hops between a given source and destination.
With that out of the way, we can move on to the different methods of generating this response from every hop along a given route. Linux comes with a traceroute utility built-in, and this is the tool that I'm going to be investigating. If you're on Windows, you can use tracert, but it doesn't have as many options as the Linux version.
Linux's traceroute utility defaults to using UDP packets on an uncommon port. It defaults to this because it's the best method that unprivileged users can use if they have a kernel older than 3.0 (check your kernel version with uname -r). It isn't ideal though, because many hosts don't expect incoming UDP packets and silently drop them.
Adding the -I flag causes traceroute to use ICMP ping requests instead. Thankfully most hosts will respond to ICMP pings, making it a much better probing tool. Some networks, however, don't allow ping requests to pass through their gateways (usually large institutions and schools), rendering this method useless in certain situations.
To combat the above, a new method was developed that uses TCP SYN packets instead of UDP or ICMP ping. If you send a TCP SYN packet (manipulating the TTL as above), practically all hosts will return some kind of message. This is commonly referred to as the TCP half-open technique, and defaults to port 80 - this allows the traceroute to bypass nearly all firewalls. If you're behind a proxy though I suspect it'll snag on it - theoretically speaking using port 443 instead should rectify this problem in most cases (i.e. traceroute -T -p 443 hostname.tld).
Traceroute has a bunch of other less reliable methods, which I'll explain quickly below.
• -U causes traceroute to use UDP on port 53. This method usually only elicits responses from DNS servers along the route.
• -UL makes traceroute use udplite in a similar fashion to UDP in the bullet point above. This is only available to administrators.
• DCCP can also be used with the -D flag. It works similarly to the TCP method described earlier.
• A raw IP packet can also be used, but I can't think of any reasons you'd use this.
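By way of illustration, here's what the invocations for the methods above look like (hostname.tld is a placeholder):

traceroute hostname.tld             # default: UDP probes to an uncommon port
traceroute -I hostname.tld          # ICMP ping requests
traceroute -T -p 443 hostname.tld   # TCP SYN (half-open) probes to port 443
traceroute -U hostname.tld          # UDP to port 53 (DNS)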
## Demystifying UDP
Yesterday I was taking a look at UDP multicast, and attempting to try it out in C#. Unfortunately, I got a little bit confused as to how it worked, and ended up spending a couple of hours wondering what I did wrong. I'm writing this post to hopefully save you the trouble of fiddling around trying to get it to work yourself.
UDP stands for User Datagram Protocol (or Unreliable Datagram Protocol). It offers no guarantee that message sent will be received at the other end, but is usually faster than its counterpart, TCP. Each UDP message has a source and a destination address, a source port, and a destination port.
When you send a message to a multicast address (like the 239.0.0.0/8 range or the FF00::/8 range for ipv6, but that's a little bit more complicated), your router will send a copy of the message to all the other interested hosts on your network, leaving out hosts that have not registered their interest. Note here that an exact copy of the original message is sent to all interested parties. The original source and destination addresses are NOT changed by your router.
With that in mind, we can start to write some code.
IPAddress multicastGroup = IPAddress.Parse("239.1.2.3");
int port = 43;
IPEndPoint channel = new IPEndPoint(multicastGroup, port);
UdpClient client = new UdpClient(43);
client.JoinMulticastGroup(multicastGroup);
In the above, I set up a few variables or things like the multicast address that we are going to join, the port number, and so on. I pass the port number to the new UdpClient I create, letting it know that we are interested in messages sent to that port. I also create a variable called channel, which we will be using later.
Next up, we need to figure out a way to send a message. Unfortunately, the UdpClient class only supports sends arrays of bytes, so we will be have to convert anything we want to send to and from a byte array. Thankfully though this isn't too tough:
string data = "1 2, 1 2, Testing!";
string message = Encoding.UTF8.GetString(payload);
The above converts a simple string to and from a byte[] array. If you're interested, you can also serialise and deserialise C♯ objects to and from a byte[] array by using Binary Serialisation. Anyway, we can now write a method to send a message across the network. Here's what I came up with:
private static void Send(string data)
{
    Send(Encoding.UTF8.GetBytes(data));
}
private static void Send(byte[] payload)
{
    Console.WriteLine("Sending '{0}' to {1}.", Encoding.UTF8.GetString(payload), channel);
    client.Send(payload, payload.Length, channel);
}
Here I've defined a method to send stuff across the network for me. I've added an overload, too, which automatically converts string into byte[] arrays for me.
Putting the above together will result in a multicast message being sent across the network. This won't do us much good though unless we can also receive messages from the network too. Let's fix that:
public static async Task Listen()
{
    while(true)
    {
        UdpReceiveResult result = await client.ReceiveAsync();
        string message = Encoding.UTF8.GetString(result.Buffer);
        Console.WriteLine("{0}: {1}", result.RemoteEndPoint, message);
    }
}
You might not have seen (or heard of) asynchronous C# before, but basically it's a way of doing another thing whilst you are waiting for one thing to complete. Dot net perls have a good tutorial on the subject if you want to read up on it.
For now though, here's how you call an asynchronous method from a synchronous one (like the Main() method, since that one can't be async apparently):
Task.Run(() => Listen()).Wait();
If you run the above in one program while sending a message in another, you should see something appear in the console of the listener. If not, your computer may not be configured to receive multicast messages that were sent from itself. In this case try running the listener on a different machine to the sender. In theory you should be able to run the listener on as many hosts on your local network as you want and they should all receive the same message.
## Reading HTTP 1.1 requests from a real web server in C#
I've received rather a lot of questions recently asking the same question, so I thought that I'd write a blog post on it. Here's the question:
Why does my network client fail to connect when it is using HTTP/1.1?
I encountered this same problem, and after half an hour of debugging I found the problem: It wasn't failing to connect at all, rather it was failing to read the response from the server. Consider the following program:
using System;
using System.IO;
using System.Net.Sockets;
class Program
{
static void Main(string[] args)
{
TcpClient client = new TcpClient("host.name", 80);
client.SendTimeout = 3000;
client.ReceiveTimeout = 3000;
StreamWriter writer = new StreamWriter(client.GetStream());
writer.WriteLine("GET /path HTTP/1.1");
writer.WriteLine("Host: server.name");
writer.WriteLine();
writer.Flush();
Console.WriteLine("Got Response: '{0}'", response);
}
}
If you change the hostname and request path, and then compile and run it, you (might) get the following error:
An unhandled exception of type 'System.IO.IOException' occurred in System.dll
Additional information: Unable to read data from the transport connection: A connection attempt failed because the connected party did not properly respond after a period of time or established connection failed because connected host has failed to respond.
Strange. I'm sure that we sent the request. Let's try reading the response line by line:
string response = string.Empty;
string nextLine;
do
{
    nextLine = reader.ReadLine();
    response += nextLine;
    Console.WriteLine("> {0}", nextLine);
} while (reader.Peek() != -1);
Here's some example output from my server:
> HTTP/1.1 200 OK
> Server: nginx/1.9.10
> Date: Tue, 09 Feb 2016 15:48:31 GMT
> Content-Type: text/html
> Transfer-Encoding: chunked
> Connection: keep-alive
> Vary: Accept-Encoding
> strict-transport-security: max-age=31536000;
>
> 2ef
> <html>
> <body bgcolor="white">
> <h1>Index of /libraries/</h1><hr><pre><a href="../">../</a>
> <a href="prism-grammars/">prism-grammars/</a>
09-Feb-2016 13:56 -
> <a href="blazy.js">blazy.js</a> 09-F
eb-2016 13:38 9750
> <a href="prism.css">prism.css</a> 09-
Feb-2016 13:58 11937
> <a href="prism.js">prism.js</a> 09-F
eb-2016 13:58 35218
> <a href="smoothscroll.js">smoothscroll.js</a>
20-Apr-2015 17:01 3240
> </pre><hr></body>
> </html>
>
> 0
>
...but we still get the same error. Why? The reason is that the web server is keeping the connection open, just in case we want to send another request. While this would usually be helpful (say in the case of a web browser - it will probably want to download some images or something after receiving the initial response), it's rather a nuisance for us, since we don't want to send another request and it's rather awkward to detect the end of the response without detecting the end of the stream (that's what the while (reader.Peek() != -1); is for in the example above).
Thankfully, there are a few solutions to this. Firstly, the web server will sometimes (but not always - take the example response above for starters) send a content-length header. This header will tell you how many bytes follow after the double newline (\r\n\r\n) that separates the response headers from the response body. We could use this to detect the end of the message. This is the recommended way, according to RFC 2616.
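Here's a rough sketch of that first approach, reusing the reader from above. It's simplified: real header parsing is more involved, and Content-Length counts bytes rather than characters, so this only behaves itself for plain ASCII content:

// Read the headers, remembering the content-length if we see it
int contentLength = 0;
string headerLine;
while (!string.IsNullOrEmpty(headerLine = reader.ReadLine()))
{
    if (headerLine.StartsWith("Content-Length:"))
        contentLength = int.Parse(headerLine.Substring("Content-Length:".Length).Trim());
}

// Then read exactly that many characters of body
char[] body = new char[contentLength];
int read = 0;
while (read < contentLength)
{
    int got = reader.Read(body, read, contentLength - read);
    if (got <= 0) break; // connection closed early
    read += got;
}
Console.WriteLine("Got Response: '{0}'", new string(body, 0, read));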
Another way to cheat here is to send the connection: close header. This instructs the web server to close the connection after sending the message (Note that this will break some of the tests in the ACW, so don't use this method!). Then we can use reader.ReadToEnd() as normal.
A further cheat would be to detect the expected end of the message that we are looking for. For HTML this will practically always be </html>. We can close the connection after we receive this line (although this doesn't work when you're not receiving HTML). This is seriously not a good idea. The HTML could be malformed, and not contain </html>.
# How to prove stuff about linear algebra?
1. Sep 24, 2005
### *melinda*
How to prove stuff about linear algebra???
Question:
Suppose $(v_1, v_2, ..., v_n)$ is linearly independent in $V$ and $w\in V$.
Prove that if $(v_1 +w, v_2 +w, ..., v_n +w)$ is linearly dependent, then $w\in span(v_1, ...,v_n)$.
To prove this I tried...
If $(v_1, v_2, ..., v_n)$ is linearly independent then $a_1 v_1 + ...+a_n v_n =0$ for all $(a_1 , ..., a_n )=0$.
then,
$a_1 (v_1 +w)+a_2 (v_2 +w)+...+a_n (v_n +w)=0$
is not linearly independent, but can be rewritten as,
$a_1 v_1 + ...+a_n v_n +(\sum a_i )w=0$
so,
$a_1 v_1 + ...+a_n v_n = -(\sum a_i )w$.
Since $w$ is a linear combination of vectors in $V$, $w\in span(V)$.
Did I do this right?
Is there a better way of doing this?
Any input is much appreciated!
2. Sep 24, 2005
### LeonhardEuler
Your proof is pretty much correct, but in this sentence:
I think you mean to say:
If $(v_1, v_2, ..., v_n)$ is linearly independent then $a_1 v_1 + ...+a_n v_n =0$ only when each $a_i=0$
3. Sep 24, 2005
### *melinda*
Yes, that would make a bit more sense. Sometimes I understand what I mean to do, but don't know how to say it.
Thanks a bunch!
4. Sep 24, 2005
### Hurkyl
Staff Emeritus
This is wrong. If the collection of vectors is independent, and if $a_1 v_1 + ...+a_n v_n =0$ then $a_1 = a_2 = \cdots = 0$.
# How to keyframe_insert without moving the object
I tried the following code:
import bpy
obj = bpy.data.objects['Cube']
obj.keyframe_insert(data_path='location', frame=0)
obj.location.z += 5
obj.keyframe_insert(data_path='location', frame=100)
Then, after executing the code, the Cube position moves in the Z direction.
But, I want to insert keyframe and not move the object on the screen.
How can I insert keyframes without moving objects on the screen?
• at the beginning of your code, you can "current_frame = bpy.context.scene.frame_current" and at the end "s.frame_set(current_frame)" – lemon Oct 20 at 12:25
• @lemon I think you should post that as an answer. – Robert Gützkow Oct 20 at 13:18
You can surround your code with something like:
import bpy

scene = bpy.context.scene
current_frame = scene.frame_current

obj = bpy.data.objects['Cube']
obj.keyframe_insert(data_path='location', frame=0)
obj.location.z += 5
obj.keyframe_insert(data_path='location', frame=100)

scene.frame_set(current_frame)
So that the current (or the frame you want) is restored once the key frames are inserted.
Though, that does not guarantee that the object will not move, as its movement is dependent on the key frames you have set.
To keyframe an object without moving the object or changing the current frame, you can create a keyframe with the current location and then get the keyframe data and change it to the desired position.
import bpy
def key_z_at_frame(o, at_frame, loc):
o.keyframe_insert(data_path='location', frame=at_frame)
act = o.animation_data.action
fc = act.fcurves.find('location', index=2)
for kp in fc.keyframe_points:
if kp.co[0] == at_frame:
kp.co[1] = loc
obj = bpy.context.object
cur_frame = bpy.context.scene.frame_current
key_z_at_frame(obj, cur_frame, obj.location.z)
key_z_at_frame(obj, 100, 5)
If the frame you want to add is always going to be at the highest frame number, you could replace the loop with
fc.keyframe_points[-1].co[1] = loc
Paper Abstract
Title
The Global Dimension of Schur Algebras for $\GL_2$ and $\GL_3$
Abstract
We first define the notion of good filtration dimension and Weyl filtration dimension in a quasi-hereditary algebra. We calculate these dimensions explicitly for all irreducible modules in $SL_2$ and $SL_3$. We use these to show that the global dimension of a Schur algebra for $GL_2$ and $GL_3$ is twice its good filtration dimension. To do this for $SL_3$, we give an explicit filtration of the modules $\nabla(\lambda)$ by modules of the form $\nabla(\mu)^{\mathrm F} \otimes L(\nu)$ where $\mu$ is a dominant weight and $\nu$ is $p$-restricted.
# Why does phosphorus give a free electron in silicon doping?
Extrinsic semiconductors are created by doping an intrinsic semiconductor; typically silicon is doped with phosphorus to create free electrons or boron to create "holes".
In the case of phosphorus doping, phosphorus takes the place of a silicon atom in the silicon lattice, and its fifth electron becomes a free electron.
Here is what I am struggling to understand with my basic knowledge. Phosphorus has the configuration $\ce{[Ne] 3s^2 3p_x^1 3p_y^1 3p_z^1}$.
This page gives a description of the promotion of a 3s electron to a 3d shell, allowing five bonds to form in the case of $\ce{PCl5}$. I would assume that a similar process results in four bonds forming in the silicon lattice and leaving a free electron.
But that same page later on mentions a 2007 paper which essentially says that that model is not accurate, because such a state is less stable than other ways of modeling the bond (assuming I'm understanding correctly; the problem is described here, and I have not read the actual paper).
What causes phosphorus to form four bonds with silicon and leave a free electron rather than, say, forming three bonds and leaving a silicon with an unpaired electron in the lattice, or bonding with another "free" electron? Is it promoting an electron to form the fourth bond and freeing the last?
In short: Why does phosphorus form four Si bonds and leave a free electron rather than forming three Si bonds when doping silicon?
# graphicx includegraphics globally centered
Is it possible to globally set \includegraphics pictures to be centered horizontally on page?
I have read the manual and see some unfamiliar global settings keys: \setkeys{Gin}{width=0.75\textwidth}
\includegraphics, in common with most similar LaTeX constructs such as \parbox, tabular etc., is not a display construct: it is just an inline box, positioned like a big letter, so it has no mechanism to control its own position. You center a graphic in the same way as you would center a word: place it in the scope of \centering or \begin{center}...\end{center}
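For example (example-image is provided by the mwe package):

\begin{center}
  \includegraphics[width=0.75\textwidth]{example-image}
\end{center}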
• Thanks for that explanation. I have issues with the pandoc converter to Word and had to delete all \center to get past... Mar 5 '19 at 22:37
• @novski you should never use \center that is the internal implementation of \begin{center} always use the environment form or use \centering Mar 6 '19 at 7:57
• im sorry i misslead this comment as it was \centering that led to errors in pandoc conversion. Mar 15 '19 at 7:40
I'm assuming that your margins are symmetric on the page. If not, say so.
\documentclass{article}
\usepackage[showframe,pass]{geometry}
\usepackage{graphicx}
\begin{document}
\centerline{\includegraphics[width=.3\textwidth]{example-image}}
\centerline{\includegraphics[width=1.1\textwidth]{example-image}}
\end{document}
• Thanks. I was hopeing for a preamble to put in my style file bus as David Carlistle lined out rhere is no such option... Mar 5 '19 at 22:39 |
# Difference in height of subscript [duplicate]
I have a problem concerning the following equation in LaTeX:
A_{1}^{\ast} A_{1} = A_{1} A_{1}^{\ast},
where the 1's under the A's are placed at different heights. Does anyone know a simple way to fix this?
-
## marked as duplicate by mafp, lockstep, zeroth, Werner, Andrew Swann Feb 13 '13 at 15:38
See, e.g., Subscripts for primed variables (possible duplicate?) – Hendrik Vogt Feb 13 '13 at 14:24
I couldn't find this page and it does provide an easy way to fix it (using the package subdepth), so this is an answer. – Vincent Feb 13 '13 at 14:27
\documentclass{article}
\begin{document}
$A_{1}^{\ast} A_{1}^{\phantom{\ast}} = A_{1}^{\phantom{\ast}} A_{1}^{\ast}$
\end{document}
\vphantom would be better than just \phantom -- look at the spacing between the subscript on the first A and the start of the second A in each pair to see why. not a big difference here, but it could be in other situations. – barbara beeton Feb 13 '13 at 22:17
# Given a set, we can always form a magma or how to think about (minimal) algebraic structures?
I've been told that algebra is about abstracting already existing structures and not the other way around. "What is the properties of the integers, can we generalize these properties in a more abstract sense?" We must have something to work with and not just making up axioms which nothing satisfies. According to a lecturer I had, working with abstract "objects" with no connection to existing structures is pretty much just set theory.
Anyhow.
If we flip the question: when does a set have algebraic structure? I'm aware that this is the wrong way to think about it, but bear with me for a second. To make this question a bit more precise I define a set to have a minimal algebraic structure if we have at least one well defined binary operator on the set. So my question becomes:
When does a set have a at least one well defined binary operator?
My naive and uninformed attempt at tackling this problem:
Given a nonempty set $S$, we can always form a minimal algebraic structure on $S$, that is $S$, can always be a magma:
Let $S$ be a nonempty set. Define the following binary operator on $S$:
$\phi: S \times S \to S$ such that $a\phi b = a, \forall a,b \in S$
Even though $\phi$ is very boring and kind of trivial, it works, right?
For example let's consider $P= \left\{p \in \mathbb{Z} \colon p \text { is prime} \right\}$ and $I = \mathbb{R} \setminus \mathbb{Q}$, two sets which at least in my mind seem to be fairly unalgebraic, but my $\phi$ is well defined on both? If this holds then $P$ and $I$ are in fact two (very trivial) magmas, so algebraic structure exists, or rather can be defined, on these sets.
Can I have a set in which my $\phi$ is not well defined?
Is this nonsense? (in the sense: is this outright just wrong)
• I'm sorry if this a fuzzy question. I'm not sure I perhaps should have added the soft question and or the elementary set theory tag. – John Smith Dec 15 '13 at 13:17
Your $\phi$ defines on the set the structure of a semigroup (it is a well-known semigroup of left zeroes).
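Indeed, associativity is immediate: for all $a,b,c\in S$,
$$(a\phi b)\phi c = a\phi c = a = a\phi b = a\phi (b\phi c),$$
so $(S,\phi)$ is a semigroup in which every element acts as a left zero.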
Another "quasi-minimal" example: fix an element $a$ of the set and define $xy=a$ for all $x,y$ ("quasi" since we have to fix). Then we get a semigroup with zero multiplication.
It seems that only "minimal" magma is a semigroup.
• So any set can be a semigroup? – John Smith Dec 15 '13 at 13:32
• You can put a semigroup structure on any set. The way that algebraic structures like semigroups are defined means that you can use any set you want for elements, modulo cardinality concerns. Obviously some sets make things easier to say than others. – Charlotte Aten Jun 20 '16 at 3:01
Can I have a set in which my ϕ is not well defined?
No, I think your $\phi$ allows one to put a trivial magma structure on every non empty set.
Is this nonsense?
I don't see how this could be useful, in any way.
• I'm well aware of the fact that this is not insightful or useful. I just made it up to aid my understanding of the existence of algebraic structures. I apologize though. – John Smith Dec 15 '13 at 13:34
• Why should you apologize? I just answered your question w.r.t. my knowledge. It may be that I'm wrong and this is indeed useful. Let's wait for the experts' opinions ;-) – Abramo Dec 15 '13 at 13:37 |
## Algebra: A Combined Approach (4th Edition)
$x∈(-6;6)$. To graph the solution set, put two parentheses to enclose the interval $(-6;6)$.
# Resistance between 2 points in infinite 3-D gas volume
1. Mar 29, 2016
### Roger44
Hi
Back in 2011 here https://www.physicsforums.com/threa...n-an-infinite-volume-of-resistive-gas.513388/ the question of the resistance between two points in an infinite volume of resistive gas was raised but petered out without a solution.
The solution could be the conductance of a simple rod of gas linking the two electrodes multiplied by a Shape Factor of 4 pi r /(1 - (r/d)^2 - (r/d)^4 - 2r/d )
where r is the radius of the two spherical electrodes and d their separation. For example, if the electrodes are of radius 1 cm and 10 cm apart, the current would be about 15 times what it would be if two electrodes were just linked by a rod of 2cm diam. My results need checking.
This result comes from thermal conductivity, for example http://www.mhhe.com/engcs/mech/holman/graphics/samplech_3.pdf, where this case would be the equivalent of the thermal conduction between two spheres buried in an infinite 3-D medium. See at the bottom of page 79. I've no idea how they get to the above result.
For conduction between two ROUND conductors on an infinite 2-D plane the shape factor is a much simpler hyperbolic-cosine (cosh) function, and at the end of this rather messy thread https://www.physicsforums.com/threa...-two-point-voltages-on-infinite-plane.832960/ you can find the maths that get to this result.
# Nyquist criterion
When using the Nyquist stability criterion, amplitude-frequency characteristic etc. we go from the Laplace image $G(s)$ to $G(j\omega )$. By definition of the Laplace transform, $s=\sigma + j\omega$. So why is $\sigma$ (real part of $s$) approximated as $0$ or whatever the case might be?
-
Did you see Wikipedia entry section Nyquist stability criterion Mathematical Derivation? – Américo Tavares May 4 '13 at 12:01
If $G(s)$ is the Laplace transform of $g$, then $F(\omega)=G(i\omega)$ is the Fourier transform of $g$, if the Fourier transform exists. You can easily see that by comparing the formulas for the two transforms - in the case of the Laplace transform the kernel is $e^{-st}$ whereas for the Fourier transform it's $e^{-i\omega t}$. (Since you usually assume that $g(t)=0$ for $t < 0$ when using the Laplace transform, the fact that the integration boundaries are different doesn't matter.)
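Writing both transforms out makes this explicit (one-sided Laplace transform, with $g(t)=0$ for $t<0$):
$$G(s)=\int_{0}^{\infty}g(t)\,e^{-st}\,dt, \qquad F(\omega)=\int_{-\infty}^{\infty}g(t)\,e^{-i\omega t}\,dt=\int_{0}^{\infty}g(t)\,e^{-i\omega t}\,dt=G(i\omega).$$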
This explains (at least if the Fourier transform exists) why $G(i\omega)$ gives you the amplitude-frequency characteristic of $g$ - it's simply the Fourier transform of $g$.
BTW, if you Laplace transform some $g$ for which the Fourier transform does not exist, you often still end up with an algebraic expression which does, in fact, evaluate to finite values even for $s=i\omega$. If this happens, what you get is not the Fourier transform, but rather the analytic extension of $G$ to the imaginary axis. You have to be very careful in this case with drawing conclusions about the behaviour of $g$ from $G(i\omega)$.
## Complexity theory – why the Cook-Levin theorem's proof implies the NP-hardness of SAT
I'm studying the Cook-Levin theorem, but I've run into a problem. The theorem shows that the computation of any NPTM can be encoded as a Boolean formula. Given a language $A$, an instance $w$, and an NPTM $M$ that decides $A$, I understand that the Boolean formula $\varphi$ is satisfiable when $M$ accepts $w$ and unsatisfiable when $M$ rejects $w$, so the decision problem "$w \in A$?" is the same problem as "is $\varphi$ satisfiable?". But I am confused about why this certifies the existence of a reduction from any other problem in NP to SAT. I cannot write down a concrete Karp reduction to SAT without specifying which problem the reduction starts from, so I cannot conclude that SAT is NP-hard. Please help me understand why the conversion to a Boolean formula implies the NP-hardness of SAT …
# Hash function for a key of type structure
I need to implement a hash function for a key of type structure. This structure is as follows:
typedef struct hash_key_s {
void *k1;
char *k2;
char *k3;
} hash_key_t;
My proposed hash function is as follows:
unsigned long hash_func (void *key)
{
hash_key_t *k = (hash_key_t *) key;
unsigned long h = 0;
char c = 0;
char *p = k->k2;
if (p != NULL) {
for (c=0; c=*p; p++) {
h = h*31 + c;
}
}
p = k->k3;
if (p != NULL) {
for (c=0; c=*p; p++) {
h = h*31 + c;
}
}
h = h*31 + (((unsigned long) k->k1) >> 2);
return h*110351524UL;
}
I just want to know if it is a good hash or if I can improve it further.
• Those for loops look completely broken. – twohundredping Dec 13 '16 at 9:49
• @twohundredping: Thanks for the comment. It didn't compile. I made the changes. – Vivek Agrawal Dec 13 '16 at 12:51
## 1 Answer
"Is this a good hash function?" is not a question that can be answered in isolation. It depends greatly on the expected distribution of the inputs and on the desired distribution of its output. Do you want each bit of the output to be equally likely to be 0 or 1, for example?
Here, I'm going to assume that you want the minimal requirements for a hash function - maximum likelihood of different outputs for different inputs, and no requirements for cryptographic-strength collision resistance.
It's interesting that you're combining the content pointed to by k2 and k3 but the value of the pointer k1 itself - a comment is needed to explain why this is what you want.
I assume that you are constrained to the signature unsigned long (*)(void*) by some library. A pointer to const void would be better as the argument, as your hash function really mustn't modify its argument. Failing that, you can declare k as pointer-to-const:
hash_key_t const *const k = key;
(It's C, so void* converts implicitly to any other object pointer type).
The repeated code to hash the two strings can be extracted into a function:
static unsigned long hash_string(const char *s, unsigned long h)
{
if (!s) return h;
while (*s)
h = h * 31 + *s++;
return h;
}
unsigned long hash_func (void *key)
{
const hash_key_t *const k = key;
unsigned long h = 0;
h = hash_string(k->k2, h);
h = hash_string(k->k3, h);
h = h*31 + (unsigned long)k->k1;
return h*110351524UL;
}
I removed the right-shift of k1 in the above - that doesn't add any extra entropy to the hash, and may (on some platforms) reduce it. But I have left your magic constant untouched - see next section.
An aspect of your algorithm that's a little suspect is the final constant. It is an even number - worse, it's a multiple of 4; this is throwing away two bits from your result. I'm not convinced there's a good reason to multiply the final result - you'll need to justify it given the expected inputs and desired distribution of result.
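To make the point about the even multiplier concrete, here is a tiny demonstration (mine, not part of the review) showing that multiplying by a multiple of 4 always zeroes the two low-order bits of the result, whatever the input hash:

#include <stdio.h>

int main(void)
{
    /* 110351524 = 4 * 27587881, so the product is always a multiple of 4 */
    unsigned long inputs[] = {1UL, 2UL, 3UL, 0xDEADBEEFUL, 987654321UL};
    for (int i = 0; i < 5; i++) {
        unsigned long h = inputs[i] * 110351524UL;
        printf("h = %lu, low two bits = %lu\n", h, h & 3UL);  /* always 0 */
    }
    return 0;
}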
Because you effectively concatenate the two strings, you'll get collisions between "ab", "c" and "a", "bc". That may or may not be a problem for your inputs, but it's worth noting in a comment in case you want to repurpose this in future. If it's undesirable, you could either perform another multiplication between the strings or you could provide a different per-character constant to each invocation.
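One minimal way to avoid this (a sketch of mine, along the lines of the delimiter fix the asker mentions in the comments below) is to mix a separator value into the hash between the two strings, so that the position of the string boundary contributes to the result. It reuses the hash_string helper shown above and the hash_key_t type from the question; the name hash_func2 and the separator value are illustrative only:

unsigned long hash_func2(void *key)     /* hypothetical variant of hash_func */
{
    const hash_key_t *const k = key;
    unsigned long h = 0;
    h = hash_string(k->k2, h);
    h = h * 31 + 0x1F;                  /* separator byte: "ab","c" now hashes
                                           differently from "a","bc"          */
    h = hash_string(k->k3, h);
    h = h * 31 + (unsigned long)k->k1;
    return h;
}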
Rather than reinventing the wheel, you may be able to re-use (or at least copy) an existing algorithm. For example, if your code is GNU GPL compatible, here's the glibc implementation:
/* We assume to have `unsigned long int' value with at least 32 bits. */
#define HASHWORDBITS 32
/* Defines the so called `hashpjw' function by P.J. Weinberger
[see Aho/Sethi/Ullman, COMPILERS: Principles, Techniques and Tools,
1986, 1987 Bell Telephone Laboratories, Inc.] */
static inline
unsigned long int
hash_string(const char *str_param)
{
unsigned long int hval, g;
const char *str = str_param;
/* Compute the hash value for the given string. */
hval = 0;
while (*str != '\0') {
hval <<= 4;
hval += (unsigned long int) *str++;
g = hval & ((unsigned long int) 0xf << (HASHWORDBITS - 4));
if (g != 0) {
hval ^= g >> (HASHWORDBITS - 8);
hval ^= g;
}
}
return hval;
}
• Thanks for the detailed explanation. I could not grasp the whole as of now and therefore will answer your questions after understanding each part of your answer. Thanks for the answer :) – Vivek Agrawal Dec 13 '16 at 12:54
• Yes, your assumption is right that I need minimal requirement. No cryptographic-strength collision resistance. I'm using this for insertion in hash data structure. Combination of k1, k2 and k3 makes a unique key. Therefore I used k1 as one of the member of key structure. Since I just need a pointer value so used "void *" as a datatype for k1. Why to expose what sort of data structure it is pointing to when I'm not going to see the data in that? That's the main motivation of having it as "void *". – Vivek Agrawal Dec 15 '16 at 13:28
• Thanks for suggesting hash_string() usage and pointing to concatenation bug. For concatenation, I've used a delimiter character and used it in hash calculation. I had the temptation to fix that before posting here, but I think I ignored it just being a bit lazy. But after your comment, I couldn't ignore it more. :) Lastly, my code is not GNU GPL compatible, so can't use that in my code. But thanks for pointing to the source code. It is really great to learn from them. :) I really appreciate your time for this review. It helped. :) – Vivek Agrawal Dec 15 '16 at 13:39 |
A&A 420, 809-820 (2004)
DOI: 10.1051/0004-6361:20035703
## Modelling photoluminescence from small particles
### I. General formalism and a simple reference implementation
G. Malloci1,2,4 - G. Mulas1,4 - P. Benvenuti2,3
1 - INAF - Osservatorio Astronomico di Cagliari - AstroChemistry Group, Strada n.54, Loc. Poggio dei Pini, 09012 Capoterra (CA), Italy
2 - Dipartimento di Fisica, Università degli Studi di Cagliari, S.P. Monserrato-Sestu Km 0.7, 09042 Cagliari, Italy
3 - INAF, Viale del Parco Mellini 84, 00136 Roma, Italy
4 - Astrochemical Research in Space Network http://www.ars-network.org
Received 17 November 2003 / Accepted 23 February 2004
Abstract
We developed a general recipe able to extrapolate the expected photoluminescence of small particles starting from available laboratory results obtained on bulk samples. We present numerical results for the simplest case, namely a spherical, homogeneous dust particle, in the limit of strongly localised fluorescence. In particular, our theoretical derivation produces an explicit, analytical, dependence of the photoluminescence spectrum on the angle between the direction of observation and that of the incoming exciting light. We expect the present model to be a useful tool that will allow the study of photoluminescence phenomena of interstellar dust to move beyond the straight, plain, possibly misleading comparison with experimental data on bulk samples.
Key words: ISM: dust, extinction - ISM: lines and bands - methods: numerical - radiation mechanisms: general - radiative transfer - scattering
### 1 Introduction
Interstellar dust is thought to be a diluted dispersion of sub-μm-sized solid grains, probably with a complex, multi-layered, and fluffy structure (see e.g., Mathis & Whiffen 1989; Li & Greenberg 1997). Regardless of their actual shape, such small particles are bound to have optical properties that are very different from those of the bulk material they are made of, the difference being about as large as that between the respective extinction properties, just for the same reasons. Therefore, any attempts to quantitatively compare photoluminescence (PL) signals from interstellar particles with experimental data obtained on bulk samples must carefully take into account the effect of particle size and optical properties. In particular, PL will undergo self-absorption and scattering within the dust particle before leaving it, and unless one is considering dust grains that are utterly non-absorbing at the wavelength of the PL, this effect is far from negligible.
Hence, for meaningful quantitative results, one ought to compare astronomical interstellar PL phenomena, such as the Extended Red Emission (see e.g., Smith & Witt 2002; Duley 2001), with laboratory data taken on samples as similar as possible to interstellar dust. If this is not feasible, one should use a detailed PL model to bridge the gap between experimental bulk properties and small particles, very much in the same way as the extinction properties of dust are usually computed from the knowledge of the complex refractive index of the material they are made of (see e.g., Bohren & Huffman 1998). We present here such a model, in which we represent the emission as stemming from a density of uncorrelated oscillating electric dipoles, whose distribution is a functional of the locally absorbed energy. We expect this representation to be valid as long as the particle being modelled can be described in terms of classical electromagnetic theory (i.e. quantum effects are not taken into consideration at all). The details of our theoretical approach and its limits of applicability are described in Sect. 2. As a quantitative test, we show its specific implementation for a homogeneous, isotropic sphere in Sect. 3, where we present numerical results. For a realistic "proof of concept'' test, we used the optical properties of processed organic refractory residues which are thought to be abundant in the diffuse interstellar medium. Section 5 draws the main conclusions of the present, germinal work, and outlines the directions of its foreseen development. Finally, formal derivation details are reported in Appendix A.
### 2 The model
The starting point of our model is the assumption that the photoluminescence (PL) power emitted by a given volume element dV is a functional of the electromagnetic energy absorbed in a neighbourhood around it.
As a first step, we thus need to be able to thoroughly solve the problem of the scattering and absorption of an electromagnetic wave hitting the dust particle under consideration. In particular we need to know the divergence of the Poynting vector associated to the Fourier component of angular frequency of the electromagnetic fields within the particle. This quantity is commonly referred to as the source function in classical electromagnetic theory (Dusel et al. 1976). is the power absorbed by the unit volume dV at position from the Fourier component of the fields of angular frequency between and . From the knowledge of we can define the photon absorption rate per unit angular frequency and unit volume as:
(1)
Every absorbed photon of energy higher than a given threshold will give rise to one or more PL photons. We now define as the average number of PL photons of frequency between and emitted by a volume element centred around the position in the grain, following the absorption of a photon of frequency at the position . To some extent, this may be thought of as a probability, but this naïve interpretation breaks down in the case where one absorbed photon may give rise to the emission of several PL photons. This representation clearly assumes absorption-emission sequences to be independent, i.e. the material is supposed to be able to completely relax after each absorption, thus nonlinear effects are completely neglected. This is obviously appropriate for solid state PL in interstellar environments.
In general, p will vanish for large distances between and ; the actual size s of the neighbourhood of which contributes to PL coming from it will depend on the material. The specific behaviour of p in will thus depend both on the material and on the shape of the sample in this whole neighbourhood. If s is small with respect to the size of the sample, and if the sample is large enough to have "bulk'' optical properties, boundary effects can be neglected in all but a thin "skin'' close to the surface, and p will only depend on the difference ; if the material is furthermore isotropic (which we will assume), p will only depend on . We remark that theoretical computations of the density of states in clusters of atoms yield results virtually undistinguishable from the bulk for atom numbers as small as a few hundreds, the specific value depending on the material considered (see e.g., Jena et al. 1992). This means that the depth of the "skin'' with optical properties significantly different from the bulk will typically be a few molecular layers, i.e. of the order of a nanometer. Therefore, unless PL is dominated by the surface, it is generally safe to assume bulk optical properties for particles significantly larger than 10 nm.
The integral of p over and V' will yield the overall efficiency of PL following the absorption of a photon of a given energy in a given part of the dust grain. This cannot exceed unity, unless multiple photons can be emitted upon the absorption of one. In real life situations, this yield is usually much lower than unity, since even exceedingly efficient fluorescent materials usually show efficiencies of the order of 20% at most. The detailed variation of p on the distance will depend on the properties of the material, but it becomes very simple in the following two extreme cases:
1.
if s is much smaller than the characteristic sizes of the sample considered and the wavelengths involved, the excitation of PL is effectively local, i.e. may be considered to be proportional to a Dirac delta with respect to ; the condition for this to be a good approximation can be quantitatively expressed as , being the propagation constant in the material considered, whose complex refractive index is N1;
2.
on the other extreme, if s is much larger than the the size of the sample, any spatial correlation between absorption and emission is lost, and we will define the resulting PL as "pseudo-thermal'', for reasons that will be made clear later.
The above limiting cases, both of which make the problem tractable, may be actually applicable to real physical situations: Hydrogenated Amorphous Carbon (HAC), as treated by Robertson (1996) and Seahra & Duley (1999), fits neatly in the first, since the excitation energy of an absorbed photon is thought to be confined in the graphitic platelet in which it was absorbed, the same graphitic platelet then emitting the resulting PL photon in a very short time, of the order of $10^{-8}$ s; band-gap PL by a homogeneous semiconductor, instead, may be an example of the second limiting case, as the absorption of a photon creates an electron-hole pair whose mean free path may easily be larger than the dimension of even a not too small dust grain.
We emphasise that we cite the two cases above merely as possible examples of the practical applicability of our approach, which is by no means limited to them. The present approach would not fail even if one or both of the cited examples should fail to fulfil the requirements for its applicability, which are indeed clearly and quantitatively stated. It would be proven to be inapplicable only if an example should be found which indeed fulfils the stated limits of applicability and shows a behaviour in contrast with our calculations. We will here restrict ourselves to these two limiting cases.
#### 2.1 The pseudo-thermal case
The simplest case is that of "pseudo-thermal'' PL. In this case, does not depend on at all, but only on , and on the material and geometry of the sample. Multiplying p by as defined in Eq. (1) and integrating over all and over the volume of the particle yields the rate of emission of PL photons per unit emitting volume and unit frequency interval , produced by absorption over the whole particle at all frequencies:
(2)
Since does not depend on , the integral over only operates on to yield the total number of photons of frequency between and absorbed by the whole grain, to obtain
(3)
In turn, depends neither on nor on , and is obviously independent of polarisation. These properties are also enjoyed by the rate of thermal emission of photons per unit volume and unit frequency interval. In thermal emission, can be derived from the principle of detailed balance, and must equal the number of thermal background photons of that same frequency interval absorbed per unit time by the same unit volume of the dust particle. This, in turn, equals the flux of blackbody photons inside the dust grain times the absorption coefficient of the material composing it, times the whole solid angle , namely
(4)
where
and is the imaginary part of the refractive index of the dust grain N1, which is frequency-dependent. Inverting Eq. (4) we obtain:
(5)
is proportional to the Planck function and is given by:
where m1= N1/N is the relative refractive index and denotes the real part of its argument. The resulting overall rate per unit frequency interval of emission of thermal photons from the particle is given by the Kirchhoff law:
(6)
where is the absorption cross-section of the grain for incident light of frequency . This overall emission is the result of the uniform thermal emission inside the dust grain, attenuated by self-absorption. Using the above equations we obtain
and thus
(7)
The effect of self-absorption is therefore completely contained in the factor multiplying . Equation (7) relates the probability of emission of a single photon of a given frequency from a given position in the grain to the probability it has to emerge, under the hypotheses that the probability of emission is uniform inside the grain and that the optical properties of the grain are known. Since and have exactly the same dependence on the position inside the dust grain, namely no dependence, the effect of self-absorption must be exactly the same, provided that the optical properties do not change, giving for the overall PL in this case:
(8)
hence the name "pseudo-thermal'' with which we labelled this case. The crucial step in the derivation above is that is constant inside the particle, which is only reasonable if the dust grain is considered homogeneous. In particular, the assumption of homogeneity implies that the complex refractive index is assumed to be constant and equal to its bulk value, which in turn means neglecting boundary effects. The effect of self- absorption, in this case, is given mainly by two parts, which can be very simply interpreted from a physical point of view: the factor in the denominator says that if the material is strongly absorbing, only a sheet below the surface of thickness of the order of a few times will contribute to PL; the other terms, particularly the term, stem from the effect that size and shape of the dust grain have on its optical properties, in a manner exactly analogous to the way they affect extinction.
Equation (8) is of limited practical use, since itself can have, in principle, a nontrivial dependence on particle size and shape, which can only be obtained by direct measurements on appropriately sized and shaped samples. It is elegant in that it does show explicitly that size and shape effects can be quite relevant in this limiting case of pseudo-thermal PL and thus measures on bulk laboratory samples are bound to be useless for direct comparison with astronomical PL stemming from small particles.
#### 2.2 The local case
The other extreme case which, although much more complicated, is still tractable, is the "local'' one, in which each PL photon is emitted from the same position in the dust grain in which the corresponding exciting photon was absorbed. In this case, due to the extreme confinement of the excitation energy, we assume:
(9)
where is a three-dimensional Dirac delta. Therefore, we may proceed in a manner analogous to Eq. (2) to define:
(10)
To be able to calculate the effect of self-absorption on the local PL, including interference phenomena due to the particle size and shape, we need an explicit representation of the electromagnetic fields produced in each elementary emission process. We will represent PL emission as a distribution of oscillating electric dipoles: for each frequency interval we associate to each volume element in the dust particle a collection of incoherently oscillating electric dipoles, oriented in the direction . We will call this quantity, which has the dimensions of dipole moment per unit volume, unit frequency interval and unit solid angle
(11)
This distribution of dipoles will be defined by requiring it to produce the local PL emission , as defined in Eq. (10).
Such a representation will be obviously appropriate if PL is indeed due to electric dipole permitted transitions, while a different representation, such as a density of (possibly higher order) electric or magnetic oscillating multipoles, would be more appropriate for a different transition. We remark, however, that in the case of absorption an adequate description can usually be obtained just in terms of the complex refractive index which, for non-magnetic materials, is determined only by the electric polarisability. There is no reason a priori why such a simplified representation should not be just as adequate to describe PL in the same material; however, in any case, using a more detailed representation would pose no significant conceptual problems, but merely complicate the practical calculations involved.
A single electric dipole , oscillating with frequency inside the grain, radiates a power given by:
(12)
where is the real part of the refractive index N1 and is the magnetic permeability of the medium composing the particle. This is the expression given by Jackson (1998) for a dipole placed in vacuum, modified to be valid for arbitrary dielectric media, the difference being in the factor . We can use Eq. (12) to calculate the power and the photon emission rate per unit volume, unit frequency and unit solid angle stemming from our dipole distribution, to yield
(13)
and
(14)
For this distribution of electric dipoles to represent the PL emission, we must require:
(15)
The distribution of orientations of the dipoles can, in principle, be very complicated, containing detailed information on the microscopic structure of the material and a nontrivial correlation between the direction of the electric field of the exciting wave and the orientation of the excited emitting dipole. If e.g. the material is a molecular solid, this will include the distribution of the orientation of the molecules, the orientation of the transition dipole moment(s) relevant for the absorption, the orientation of the transition dipole moment(s) relevant for the PL emission. This can cause a very significant correlation between the direction of the electric field in the incoming wave and in the outgoing PL emission (Rusli et al. 1996). This effect is maximum if the exciting light is polarised to begin with, and if the transition dipole moment(s) relevant for the absorption and the transition dipole moment(s) relevant for the PL emission are parallel. This is true, in particular, if one considers PL emission excited by a polarised laser beam and examines PL emission at a frequency very close to that of excitation, maximising the probability that the fluorescent transition be the same that caused the absorption: in this case, the transition moments are obviously coincident. For astrophysical applications, however, we are interested in PL excited by natural light, hence non-monochromatic and unpolarised, in an amorphous molecular solid, such as the refractory organic residue which is thought to be produced by energetic processing of interstellar ices (see e.g. Greenberg & Pirronello 1991). In such a material, there is obviously no preferential orientation of the molecules, which would otherwise show up as birefringence as well. As to the specific case of the Extended Red Emission, this is known to be excited in the UV and emitted in the visible (Smith & Witt 2002). To wrap things up, in the organic clusters which make up the organic refractory residue, UV absorption is due to the superposition of a huge number of differently polarised electronic transitions, all of them contributing to the subsequent PL emission. For these reasons, we will here make the admittedly drastic simplification that any polarisation information be lost between absorption and the subsequent PL emission. The detailed study of possible polarisation effects will be dealt with in a subsequent paper.
In the particularly simple case in which the dipole distribution is isotropic, the integral over amounts to a simple multiplication by , hence from Eqs. (14) and (15) we obtain the relation:
(16)
Our approach closely resembles the semi-classical formalism of inelastic scattering by single molecules embedded in small particles, as developed in several papers (Kerker et al. 1978a; Wang et al. 1980; Pendleton & Hill 1997; Chew et al. 1976a; Kerker & Druger 1979; Videen et al. 1991; Hill et al. 1996; Kerker et al. 1978b; Chew et al. 1976b). In this theory induced emission is represented in two separated steps. In the first, a molecule located at a particular position inside the grain is excited by the absorption of a photon at the incident frequency . The second step is the emission of radiation at the shifted frequency by the active molecules at any location within the particle; this emission is described by the electromagnetic field of an induced dipole placed at the same location.
We now need to determine the electromagnetic fields leaving the dust grain, given an oscillating dipole moment located at a specified position inside it. This problem is completely analogous to that of calculating absorption and scattering by a dust particle, given its optical properties and an incoming plane wave. The transmitted fields outside the particle are expressed as (, ), while the internal fields (, ) are decomposed into the sum:
(17)
between the fields ( , ) produced by the oscillating dipole and the scattered fields ( , ) inside the particle. and are the fields produced by an oscillating dipole embedded in an unlimited medium characterised by the refractive index N1, and therefore include the contribution by the induced dipoles in the medium. The fields and are essentially due to reflections at the boundary of the particle. Indeed, as shown in Appendix A, when the material of the sphere and that of the surrounding medium have the same refractive index these scattered fields vanish identically.
We are ultimately interested in the total power irradiated by the particle into a unit solid angle about a given direction. To obtain it, we first evaluate the instantaneous Poynting vector from the outgoing fields (, ), under the far-field approximation; then a time average and a sum over all possible orientations of the dipole are performed. This is the power emitted per unit solid angle around a given direction, after the radiation emitted by the volume element escapes the particle. As derived in detail in Appendix A, if the dipole distribution is isotropic the above quantity turns out to be given by
(18)
The factor contains, in a sense, all the effects of the dust grain on the single PL photon emitted inside it, as indeed it modulates the isotropic power emitted inside the particle, affecting the angular and spectral distribution of the emerging PL. Equations (10) through (16) can be combined with Eq. (18) above to yield:
(19)
To simplify the subsequent formalism, we henceforth consider monochromatic excitation at angular frequency . This implies no loss of generality, since the general case can still be obtained by superposition. Eq. (19) is thus simplified to:
(20)
Integrating the previous equation in over the volume of the particle we obtain the total PL power per unit frequency and unit solid angle in a given direction:
(21)
following absorption at frequency .
In light scattering theory the extinction properties of small particles are usually expressed in terms of the extinction cross-section , which is the sum of the absorption cross-section and the scattering cross-section (Bohren & Huffman 1998). To follow this convention and to clearly separate the dependence of on the position inside the particle, we express the source function in Eq. (1) as the irradiance , times :
(22)
where can be seen as the contribution by the volume element to the total absorption cross-section of the particle. Therefore, Eq. (21) reduces to:
(23)
The ratio of to is a quantity with dimensions of area per unit solid angle, per unit frequency which we define as the differential cross-section for PL:
(24)
Physically it expresses the angular distribution of the light emitted by the particle: the amount of light (per unit incident irradiance) emitted at frequency by the particle, into a unit solid angle about a given direction. This definition is similar to the one of the differential scattering cross section , commonly used in light scattering theory (see e.g., Bohren & Huffman 1998). We thus write:
(25)
The treatment we presented so far is very general, and can be applied, in principle, to any case for which the electric and magnetic fields induced inside the particle by an incident light beam can be calculated. In particular we can
1.
apply the model to a known experimental configuration, in which , , and can all be measured or computed, in order to derive ;
2.
apply the model to the (astrophysically relevant) small dust particle we are interested in, for which we know from the first step, we can compute and and, assuming that we know , obtain .
In equations, the first step can be written as
(26)
while the second step can be written as
(27)
All of the small particle effects, including the specific size of the particle, their angular dependence and their dependence on the complex refractive index of the material are completely contained in the factor multiplying on the right hand side of the above equation. Hence these two steps, taken together, provide a general recipe to extrapolate laboratory PL results, obtained from bulk samples, to small dust particles, under the assumption that PL excitation is local and the resulting emission isotropic, which was the aim of the present work.
The above Eq. (27), derived for monochromatic incident light and a specific dust particle shape and size, can be straightforwardly generalised for a distribution n(a) of particles and non-monochromatic incident light. In this case one simply gets
(28)
#### 2.3 Local PL from a homogeneous sphere
To provide a simple, practical "proof of concept'' implementation of our model, while still able to yield some useful physical insight in the study of PL from interstellar dust grains, we consider a spherical, homogeneous particle illuminated by an unpolarised, parallel light beam. This enables us to make use of the standard Lorenz-Mie theory to describe the absorption and to derive analytical results for the resulting PL. This simple case can also be easily adapted to model a realistic laboratory configuration, hence providing the foundation for both the first and the second steps outlined in the previous section.
To exploit the symmetry of the problem we expand all of the fields as series of vector spherical harmonics (VSHs), which can be shown to be orthogonal and complete for transverse waves (see e.g., Bohren & Huffman 1998). As usual, the expansion coefficients can be derived by imposing the continuity of tangential components of the fields at the boundary surface between the particle and the surrounding medium, and using the orthogonality properties of the VSHs. All of the relevant formulae obtained for this case are presented in Appendix A. We refer the reader interested in the details of the full analytical derivation to Malloci (2003). We expressed the angular dependence of both and into Eq. (33) with the help of the generalised spherical functions (GSFs) Pm,nl (Hovenier & Van der Mee 1983). As expected from the symmetry of the problem, we obtain
(29)
where the angular dependence on the angle between the direction of incoming light and the direction of the PL light (see Fig. 1) is expressed in a series of Legendre polynomials . The explicit expression for the coefficients can be derived with some algebraic labour as shown in Appendix A. They depend explicitly on the particle size and on the refractive index N1 and, through the latter, implicitly on and .
Figure 1: Representation of the chosen coordinate system.
In turn, according to the first step outlined in the previous section, we can express in terms of the experimentally measured PL yield :
(30)
where
(31)
is the total PL yield, i.e. the ratio between the measured PL power per unit solid angle and unit frequency in a given direction, as shown in the experimental configuration depicted in Fig. 2, and the power absorbed by the sample from the monochromatic incident exciting light beam. The term includes all the gory details of light propagation and self-absorption in the macroscopic laboratory sample, which are obviously dependent on , and the experimental configuration, explicitly represented in this case by the angle , as defined in Fig. 2. Of course, the function g will be different for different specific experimental configurations, possibly depend on different parameters, and will thus need to be calculated on a case by case basis. The derivation of for the specific case implemented here can be found in the third part of Appendix A; for more details we again refer the reader to Malloci (2003). Figure 3 shows its behaviour for two different values of and three different excitation frequencies.
Figure 2: Schematic description of the experimental setup to measure the PL yield from a bulk sample and hence obtain in the local approximation. The sample can be seen as a portion of a sphere of very large radius, represented by the dotted outline.
We can now combine Eqs. (29) and (30) to obtain
(32)
where the form factor (with dimensions of an area) is given by:
(33)
Equations (32) and (33) show two properties:
• is the product of the experimentally measured PL yield times the "form factor'' which wholly contains all the small particle effects, including their angular dependence and their dependence on the complex refractive index of the material;
• the angular dependence of is expressed analytically in terms of a simple expansion in Legendre polynomials .
Equation (32) provides the promised link between the experimental measurement of PL, as performed on a macroscopic sample, and the expected PL spectrum for the same material "ground'' into small spheres. The computation of the expansion coefficients involves a single numerical integration in the radial dimension of the dust particle. For any practical purposes, this series can be truncated at a finite, relatively small number of terms (Bohren & Huffman 1998).
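As an implementation note (ours, not the paper's): once the expansion coefficients have been computed, evaluating the truncated Legendre series of Eq. (29) at a given angle only requires the standard three-term recurrence for the Legendre polynomials. A minimal C sketch, with the hypothetical coefficient array c[] standing in for the computed expansion coefficients:

#include <math.h>

/* Evaluate sum_{l=0}^{L} c[l] * P_l(cos(theta)) using the recurrence
   (l+1) P_{l+1}(x) = (2l+1) x P_l(x) - l P_{l-1}(x).                  */
double legendre_series(const double *c, int L, double theta)
{
    double x = cos(theta);
    double p_prev = 1.0;        /* P_0(x) */
    double p_curr = x;          /* P_1(x) */
    double sum = c[0] * p_prev;
    if (L >= 1)
        sum += c[1] * p_curr;
    for (int l = 1; l < L; l++) {
        double p_next = ((2.0 * l + 1.0) * x * p_curr - l * p_prev) / (l + 1.0);
        sum += c[l + 1] * p_next;
        p_prev = p_curr;
        p_curr = p_next;
    }
    return sum;
}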
Figure 3: The function computed in the local approximation for the experimental setup depicted in Fig. 2, for three fixed incident wavelengths and two different values of the detector angle .
#### 2.4 Secondary PL
Figure 4: Sample or1: incident wavelength 0.411 μm, sphere radius 0.05 μm (close to the Rayleigh limit).
In the practical implementation we presented here, we made the simplifying approximation to only consider primary PL, i.e. the emission of photons following the absorption of one photon from the incoming external field. We solved the well known problem of the absorption of light from an incident unpolarised wave and expressed the source function as proportional to the divergence of the resulting Poynting vector inside the particle. However, should also include a contribution from self-absorbed photons, as computed from the divergence of the Poynting vector stemming from the PL itself. This contribution to gives rise to secondary PL, i.e. the emission of a photon following the self-absorption of a previous PL photon. This does not formally affect the derivation above at all, except for one point: one should consider the contribution of self-absorbed photons to the source function, which is obtained from the divergence of the Poynting vector associated to the internal field (, ), of Eq. (17). As a consequence of this inclusion, PL at any given wavelength will be related to the PL and optical properties of the material at all smaller wavelengths, making practical calculations much more difficult. A perturbative approach is always possible, in which one considers, as a first approximation, just the contribution to the source function from the incident exciting field, computes the resulting PL and the resulting contribution of self-absorption to the source function, and repeats the calculations in a self-consistent way until the desired accuracy is reached. However, each subsequent correction will be of the order of the PL yield times the previous one; therefore, for any reasonable PL yield, the zero order approximation to completely neglect secondary PL will already be a good one, and any corrections beyond the first order will be extremely small.
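To illustrate the order-of-magnitude argument, here is a schematic scalar sketch of ours (not the paper's radiative-transfer calculation): if each generation of secondary PL carries roughly a factor eta = PL yield × self-absorbed fraction of the previous one, the total converges geometrically and the first-order correction already bounds the error made by neglecting secondary PL. The numbers used below are illustrative placeholders.

#include <stdio.h>

int main(void)
{
    double eta   = 0.2 * 0.3;  /* illustrative: 20% PL yield, 30% of PL self-absorbed */
    double term  = 1.0;        /* primary PL, normalised to 1 */
    double total = 0.0;
    for (int order = 0; order < 10; order++) {
        total += term;
        term  *= eta;          /* each generation is ~eta times the previous one */
    }
    printf("total/primary = %.4f (first-order correction of order eta = %.2f)\n",
           total, eta);
    return 0;
}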
From a qualitative point of view, the effect of secondary PL is obviously in the direction of increasing the yield, this increase being necessarily in the form of lower energy photons, moving the peak of the spectrum slightly to the red.
### 3 Numerical results
We present here the numerical results of the application of our local PL model to the specific case of a spherical, homogeneous dust particle, considering only primary PL. We used the optical properties of processed organic refractory residues, in a form as expected in the diffuse interstellar medium, as given by Jenniskens (1993). In particular, we used the optical constants corresponding to two different organic refractory residues subjected to energetic processing, here labelled or1 and or2, the latter being the more heavily processed one. The outer medium was assumed to be the vacuum.
Figure 5: Sample or1: incident wavelength 0.220 μm, sphere radius 0.35 μm.
Figure 6: Sample or1: incident wavelength 0.411 μm, sphere radius 0.35 μm.
Figure 7: Sample or1: incident wavelength 0.514 μm, sphere radius 0.35 μm.
Figure 8: Sample or1: incident wavelength 0.411 μm, sphere radius 0.95 μm.
Figure 9: Sample or1: incident wavelength 0.411 μm, sphere radius 2.05 μm (approaching the geometric optics limit).
Figures 4 through 11 display a sample of the results, relative to excitation with different monochromatic exciting wavelengths, namely 220 nm, 411 nm and 514 nm; the sphere radius is taken to be in the range 50-2050 nm and two sets of complex refractive indices were considered, corresponding to organic refractory residues obtained by UV irradiation of laboratory analogues of interstellar ices at different doses of absorbed energy. While we sampled a much larger grid of the parameter space, these results suffice to clearly demonstrate the main effects of self-absorption on the expected PL, and the impact on them of each parameter. They are illustrated with reference to the geometric configuration depicted in Fig. 1. The direction of propagation of the incident light defines the z axis, the forward direction. The angle expressing the position of the detector in the laboratory setup assumed (see Fig. 2) is taken to be . Each figure shows at the left the distribution of the absorbed energy inside the sphere, expressed in terms of the absorption cross section per unit volume, evaluated through Eq. (A.1), normalised to the total , computed using Eq. (A.2). The contour plots show the distribution of the locally absorbed energy in a plane containing the z-axis (cf. Fig. 1). Given the symmetry of the problem, if the exciting light is unpolarised the absorption pattern is the same in any such plane.
The box at the right represents the form factor , as a function of the angle between the incident beam and the direction of observation and of the emission wavelength . Therefore, these three-dimensional plots offer an explicit visual representation of the impact of particle size effects in modulating the spectral distribution of the PL observed in laboratory experiments. The form factor is expressed in gigabarns (1 Gb = $10^{-15}$ cm$^2$).
### 4 Discussion
The distribution of the locally absorbed energy inside a particle is well known to be a complicated function of the position (Dusel et al. 1976; Kerker 1973). Our contour plots of the absorbed energy distribution correspond to increasing particle radii. As expected, featureless absorption is observed for very small size parameters (defined as $x = 2\pi a/\lambda$, the surrounding medium being the vacuum here), while for increasing a, the absorption shows an increasingly structured pattern. For the optical properties of the sample or1 considered in this work there are cases in which the absorption is larger on the side opposite to the illuminated one. At large size parameters absorption is increasingly concentrated on the side where the particle is irradiated, as expected when approaching the geometrical optics limit.
This richly structured absorption pattern governs the behaviour of the form factor, which turns out to depend strongly on the emitted wavelength and on the angle. This vindicates our initial Ansatz that geometry effects are able to considerably modify the spectral distribution of the PL observed in laboratory experiments. In particular, the wavelength dependence is conspicuous for all particle sizes: in the Rayleigh limit (sphere radius 0.05 μm, with size parameters of about 0.8 and 0.6 for 411 nm and 514 nm, respectively) the form factor changes by a factor of about 2-3 over the range of emitted wavelengths considered, becoming even more marked for larger particles all the way up to the geometrical optics limit at 2.05 μm (with size parameters of about 31.3 for 411 nm and 25.1 for 514 nm).
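For reference, the quoted size parameters follow directly from the definition above (our check, with vacuum as the surrounding medium):
$$x=\frac{2\pi a}{\lambda}:\qquad a=0.05\,\mu\mathrm{m}\ \Rightarrow\ x\simeq\frac{2\pi(0.05)}{0.411}\approx 0.76,\quad \frac{2\pi(0.05)}{0.514}\approx 0.61;\qquad a=2.05\,\mu\mathrm{m}\ \Rightarrow\ x\approx 31.3\ (411\,\mathrm{nm}),\ 25.1\ (514\,\mathrm{nm}).$$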
As to the angular dependence, the results show an increasing variation for growing size parameters; the astrophysical implications are discussed in a subsequent paper (Mulas et al. 2004).
When the incident wavelength is comparable to the particle dimensions, the form factor displays an oscillatory behaviour, which tends to be damped in the geometrical optics limit. This is not unexpected, as it is very similar to what one observes for extinction (see for example Fig. 4.6 on page 105 of Bohren & Huffman (1998), which shows an interference structure with a series of alternating broad maxima and minima, and weaker and weaker oscillations around the expected asymptotic value for increasingly large size parameters).
Figure 10: Sample or2: incident wavelength 0.411 μm, sphere radius 0.35 μm.
Figure 11: Sample or2: incident wavelength 0.411 μm, sphere radius 0.95 μm.
It should be noted that the positions of peaks and valleys, for a chosen material, depend on particle size and on exciting wavelength; therefore, the small scale interference structure is likely to be washed out if one observes a broad distribution of particle radii and/or if their PL is excited by non-monochromatic light with a broad spectrum. In the most generic case of non-monochromatic light and a distribution n(a) of particle sizes we may rewrite Eq. (28) as
(34)
which retains the explicitly analytical angular dependence of the emitted power , expanded in a (conveniently truncated) series in (cf. Appendix A for more details).
Indeed, in Mulas et al. (2004) we consider two standard dust particle size distributions of the kind first proposed by Mathis et al. (1977), in order to infer the impact of the present model on the actual astrophysical problem of the Extended Red Emission.
In this case, will retain only the overall large scale features of , while smoothing out the smaller oscillatory interference details (see Mulas et al. 2004). The overall behaviour will be governed by and large by the complex refractive index of the material.
### 5 Conclusions and forthcoming work
In spite of the extreme simplifications in its present numerical application, this model unambiguously demonstrates that self-absorption and geometric effects must be taken into account when comparing laboratory PL data taken on macroscopic samples with the observed PL from small particles, since they can have a rather important effect. Some perhaps misleading conclusions drawn from a direct comparison neglecting these effects may have to be regarded in a new light. In this respect, we present a more thorough analysis of the implications of this model for the specific case of the Extended Red Emission in a second paper (Mulas et al. 2004). We expect to soon obtain new laboratory measurements of the optical and PL properties of various IPHAC samples in collaboration with the research group of Laboratory Astrophysics at the Catania Astrophysical Observatory, to use them along with the present model for a meaningful, quantitative comparison with available ERE observations.
To reconcile the observed under-solar abundances of heavy elements and the observed extinction, the present, state-of-the-art interstellar dust models represent dust particles as complex aggregates (core-mantle and/or fluffy, porous grains), very different from our oversimplified homogeneous spheres. This makes it clear that the present application of our physical model of PL must be but a first step towards more realistic interstellar dust grain PL models. The first, and easiest unrealistic assumption which needs to be relaxed is homogeneity, in order to assess the impact of a multi-layer core-mantle structure on the outgoing PL while retaining spherical symmetry. To drop the latter is a more ambitious project, which will need a more sophisticated, and computationally expensive, approach, such as the T-matrix method, recently used to study the extinction of fluffy and porous dust grains (Saja et al. 2001; Iatì et al. 2001, 2003). This method is a natural match to our model, the major obstacle to its implementation being the need of much larger computing power resources to first find the coefficients of the expansions in VSHs and then numerically integrate the PL over the contributing volume of the particle.
On a parallel track, we want to extend our model to compute physical quantities related to other Stokes parameters besides I (as defined, e.g., in Bohren & Huffman 1998, Sect. 2.11.1), to be able to study any polarisation effects stemming from the inhomogeneous distribution of absorption and resulting PL within the dust grain.
Acknowledgements
G. Malloci acknowledges the financial support by INAF - Osservatorio Astronomico di Cagliari. The authors are thankful to Prof. C. V. M. Van der Mee for his help with generalised spherical functions theory, and Dr. C. Cecchi-Pestellini and Prof. F. Borghese for their suggestions and useful discussions.
## References
• Bohren, C. F., & Huffman, D. R. 1998, Absorption and scattering of light by small particles (New York: John Wiley & Sons Inc.)
• Chew, H., Kerker, M., & McNulty, P. J. 1976a, J. Opt. Soc. Am., 66, 440
• Chew, H., McNulty, P. J., & Kerker, M. 1976b, Phys. Rev. A, 13, 396
• De Rooij, W. A., & Van der Stap, C. C. A. 1984, A&A, 131, 237
• Duley, W. W. 2001, ApJ, 553, 575
• Dusel, P. W., Kerker, M., & Cooke, D. D. 1976, J. Opt. Soc. Am., 69, 55
• Greenberg, J. M., & Pirronello, V. 1991, Chemistry in space
• Hill, S. C., Saleheen, I., Barnes, M. D., Whitten, W. B., & Ramsey, J. M. 1996, Appl. Opt., 35, 6278
• Hovenier, J., & Van der Mee, C. 1983, A&A, 128, 1
• Iatì, M. A., Cecchi-Pestellini, C., Williams, D. A., et al. 2001, MNRAS, 322, 749
• Jackson, J. D. 1998, Classical Electrodynamics, 3d ed. (New York: John Wiley & Sons Inc)
• Jena, P., Khanna, S. N., & Rao, B. K. N. 1992, Physics and Chemistry of Finite Systems: From Clusters to Crystals, Proc. of the NATO Advanced Research Workshop, Richmond, VA, USA, October 8-12, 1991 (Dordrecht: Kluwer Academic Publishers)
• Jenniskens, P. 1993, A&A, 274, 653
• Kerker, M. 1973, Appl. Opt., 12, 1378
• Kerker, M., & Druger, S. 1979, Appl. Opt., 18, 1172
• Kerker, M., McNulty, P. J., Sculley, M., Chew, H., & Cooke, D. 1978a, J. Opt. Soc. Am., 68, 1676
• Kerker, M., McNulty, P. J., Sculley, M., Chew, H., & Cooke, D. D. 1978b, J. Opt. Soc. Am., 68(12), 1686
• Li, A., & Greenberg, J. M. 1997, A&A, 323, 566
• Malloci, G. 2003, Ph.D. Thesis, Università degli Studi di Cagliari
• Mathis, J. S., Rumpl, W., & Nordsieck, K. H. 1977, ApJ, 217, 425
• Mathis, J. S., & Whiffen, G. 1989, ApJ, 341, 808
• Mulas, G., Malloci, G., & Benvenuti, P. 2004, A&A, 420, 921
• Pendleton, D. J., & Hill, S. C. 1997, Appl. Opt., 36, 8729
• Robertson, J. 1996, Phys. Rev. B, 53, 16303
• Rusli, G., Amaratunga, A. J., & Robertson, J. 1996, Phys. Rev. B, 53, 16306
• Saija, R., Iatì, M. A., Giusto, A., et al. 2003, MNRAS, 341, 1239
• Saija, R., Iatì, M. A., Borghese, F., et al. 2001, ApJ, 559, 993
• Seahra, S. S., & Duley, W. W. 1999, ApJ, 529, 719
• Smith, T. L., & Witt, A. N. 2002, ApJ, 565, 304
• Videen, G., Bickel, W. S., & Boyer, J. M. 1991, Phys. Rev. A, 44, 1358
• Wang, D., Kerker, M., & Chew, W. 1980, Appl. Opt., 19, 2315
### Appendix A: General formalism and basic concepts
This appendix provides the basic concepts of the theoretical background of this work as well as the derivation of the main relations used in the application of our model of PL by small particles. We will discuss separately the two mechanisms of excitation and subsequent emission. The distribution of the absorbed energy inside the particle is presented in Sect. (A.1), while the treatment of the PL emission is given in Sect. (A.2). Section (A.3) provides the derivation of for the case implemented in the present work. We followed the same units and notations adopted in Bohren & Huffman's book (1998), henceforth B&H. Details of the reductions to obtain the formulae here presented are given in Malloci (2003).
#### A.1 Absorption
We consider a spherical, isotropic and homogeneous dust particle of radius a. As depicted in Fig. 1, we choose the centre of the particle as the origin of a rectangular coordinate system (x, y, z). We are interested in the distribution of the absorbed energy inside the particle under the hypothesis of incident natural light. Unpolarised light is described as the incoherent superposition of the two mutually orthogonal plane waves at frequency , both of which are expanded in an infinite sum of vector spherical harmonics (henceforth VSHs). This expansion also dictates the form of the scattered and internal fields (B&H).
Assuming the magnetic permeability of the particle to be the same as that of the surrounding medium, the resulting expression for the contribution of the volume element to is given by:
(A.1)
where m1 = k1 / k = N1 / N is the relative refractive index, , and N1, N being the wave numbers and the refractive indices of particle and medium, respectively. n and m are positive integers and the prime, as usual, denotes differentiation with respect to the argument of the function, while the asterisk represents complex conjugation. The argument of the Riccati-Bessel functions is the dimensionless variable . The angular functions and are expressed in terms of the associated Legendre functions of the first kind Pn1. The expansion coefficients cn and dn are completely determined by the size parameter x, related to the particle radius a by (B&H).
Equation (A.1) is independent of the azimuthal angle , as expected on physical grounds for unpolarised incident light. The integration over the volume of the sphere leads to an expression of in terms of cn and dn:
(A.2)
which provides the same value for as obtained by the standard Lorenz-Mie theory (B&H).
#### A.2 Emission
We consider a dipole moment localised at position inside the sphere and oscillating at frequency :
where we introduced the Euler angles and specifying the arbitrary orientation of the dipole. With no loss of generality, we will restrict the following derivation to belonging to the positive z axis, since any other position can be subsequently obtained with a simple rotation of the reference frame. Suppressing the time dependence , the electromagnetic dipole fields occurring in Eq. (17) are given by:
(A.3)
(A.4)
where , and k1 and are, respectively, the wave number and the permittivity of the medium composing the particle. We use the same notation as Jackson (1998), modified so that Eqs. (A.3) and (A.4) are valid for arbitrary dielectric media. To exploit the symmetry of the problem, it is advantageous to express the electromagnetic field as a series in vector spherical harmonics (VSHs).
After a lengthy, although straightforward, procedure, the expansion of and in VSHs results in:
(A.5)
(A.6)
where with and we further defined:
(A.7)
Equations (A.5) and (A.6) are valid only for the radial domain ; this is sufficient for us, since we will need to evaluate the fields at the surface of the sphere in order to impose the boundary conditions. The electromagnetic fields are expected to show a singularity in the position where the dipole is located, i.e. r=r0 and ; this singularity does not emerge from any single term in the expansion, but instead it is the series that is divergent in this case.
The expansion above, along with the orthogonality of the VSHs, the boundary conditions and the far field asymptotic behaviour expected, set the form of the complete solution, i.e. the expansion of both the scattered field ( ) and the outgoing field ( ). On physical grounds, the finiteness of the fields at the origin requires only the well behaved spherical Bessel functions of the first kind jn(k1 r) to be present in the expansion inside the sphere, whereas the spherical Bessel functions of the second kind yn(k1 r), which diverge for r=0, do not occur. The asymptotic behaviour of the spherical Hankel functions (B&H) leads to the choice of hn(1)(k r) = jn(k r) + i yn(k r) in the region outside the sphere, since the electromagnetic field leaving the particle must be an outgoing wave. Thus, appending as usual the superscript (1) which specifies that jn is the function containing the radial dependence in the VSHs:
(A.8)
(A.9)
where vn and wn are the coefficients to be determined, and Hn and En are the same as in Eq. (A.7). Likewise, introducing an and bn, another pair of coefficients to be determined, and appending the superscript (3) to VSHs whose radial dependence is expressed by hn(1), the fields outside the particle are
(A.10)
(A.11)
where and are the expressions corresponding to (A.7) for the region outside the particle:
(A.12)
The four unknown coefficients an, bn, vn and wn are obtained imposing the usual boundary conditions across the sphere surface. After some manipulation, making use of the orthogonality of VSHs one obtains:
with . The denominators of vn and an are identical, as are those of wn and bn. As in the previous section, making the assumption we have:
(A.13)
(A.14)
When the particle and the surrounding medium have the same optical properties, the transmitted fields must coincide with the dipole fields, while the scattered fields must vanish. Indeed, in this limit an and bn approach unity, while vn and wn vanish.
We are eventually interested in the total power irradiated by the particle at very large distances from it, into a small solid angle about a given direction. In the far-field region the radial components Er and Hr of the external fields become negligible with respect to the angular ones, which are related by:
This follows from the asymptotic behaviour of the Riccati-Bessel function and its derivative (B&H). Thus, the time-averaged Poynting vector in the far- field is given by:
The expression of and is given in Malloci (2003).
So far we have been considering a single, discrete dipole, oscillating at a given frequency. We can move to a continuous distribution of dipoles with the obvious substitutions
(A.15)
and
(A.16)
The incoherent sum over all possible orientations of the dipoles yields
(A.17)
which can be evaluated observing that:
The total power radiated per unit solid angle at frequency by this isotropic collection of dipoles centred in is hence given by:
If the medium surrounding the particle is non-absorbing (not necessarily the vacuum) this quantity at large distances must be independent of r; it only depends on the position of the dipole inside the grain and the direction at which emission is observed.
With the help of Eq. (A.12) defining , we obtain, after some manipulation, Eq. (18) of Sect. 2.2:
The expression of for a homogeneous, isotropic sphere is given by:
= (A.18)
where aj and bj are given by Eq. (A.13). If it is easily found that
Equation (A.18) is obviously -independent, since we restricted to be along the z-axis and summed over all possible orientations. The same Eq. (A.18) turns out to also provide the expression of for an arbitrary , with just the simple substitution
in the arguments of the angular functions , and Pj1 into Eq. (A.18).
We thus obtained all of the ingredients occurring in Eq. (25) in the specific case of a homogeneous, isotropic sphere. The angular integrations can be evaluated analytically expanding the angular functions contained in and as linear combinations of generalised spherical functions (GSFs) Pm,nl (Hovenier & Van der Mee 1983), which yields
(A.19)
and
(A.20)
where the coefficients of the expansions are:
= =
and the meaning of the newly introduced terms is:
the last two terms being Wigner 3j-symbols, well known in the quantum theory of angular momentum (see e. g. De Rooij & Van der Stap 1984). The infinite sums in the above equations can be truncated, for practical purposes, to a finite number of terms sufficient to make the truncation error negligible (B&H). We will call N and P the number of terms required for the expansions in and , respectively.
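As a purely illustrative aside (the quantum numbers below are arbitrary, not values taken from the text), the Wigner 3j symbols entering these expansion coefficients can be evaluated exactly with standard tools, e.g. SymPy:

# Evaluate a Wigner 3j symbol exactly with SymPy (arbitrary illustrative arguments).
from sympy.physics.wigner import wigner_3j

val = wigner_3j(2, 6, 4, 0, 0, 0)   # (j1 j2 j3; m1 m2 m3)
print(val, float(val))              # exact expression and its numerical value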
Substituting Eqs. (A.19) and (A.20) in Eq. (25), the angular integration can be evaluated making use of the properties of the GSFs yielding, after a hearty amount of algebra which we spare the reader:
(A.21)
with
(A.22)
Details of the numerical methods used to evaluate each term of Eq. (A.21) are given in Malloci (2003).
#### The evaluation of p
We need to practically evaluate Eq. (26), to obtain from laboratory measurements of the PL yield of a bulk sample in a specific experimental setup. We consider the experimental configuration of Fig. 2, in which a bulk sample is illuminated by a collimated laser beam of frequency travelling in the positive z direction, and the PL emission at frequency is observed at angle with respect to the exciting radiation. We may rewrite Eq. (26) multiplying both the numerator and the denominator by the irradiance of the incident beam and the total power absorbed by the sample, to obtain:
(A.23)
where we used Eqs. (24) and (31) to introduce the experimental differential PL yield.
To be able to derive from the experimental measurement of we must now evaluate the fraction
(A.24)
This must generally involve a detailed analysis of the propagation of the light impinging on the sample, its absorption and the propagation of the resulting PL up to the collecting optics of the detector, which in turn will require the expansion of the electromagnetic fields using a basis appropriate to the geometry of the sample and to its symmetry (i.e. usually plane waves). However, if the material considered is sufficiently absorbing, it is possible to reuse some of the results obtained in the previous sections for a sphere.
In a typical experimental configuration, the collimating optics will have a diameter of the order of 2 cm, a focal length of the order of 15 cm, and will illuminate a spot of less than 10 μm on the surface of the sample, when focused. The parallel laser beam before focusing has a diameter of the order of 2 mm. This beam penetrates into the sample for a depth of the order of a few times , being the absorption coefficient at incident wavelength, which translates into less than 10 μm for the organic refractory residues considered for the examples in this work.
Given its small aperture, the relatively large focal length and the short path it travels inside the sample, the collimated laser beam can be considered as a portion of a parallel beam, confined into a cylinder: the diameter of the beam will change by less than one part in 10^4 due to its divergence before being completely absorbed. Hence, we represent this experimental configuration as a plane parallel slab in which a cylinder of diameter 10 μm is illuminated by a normally incident portion of plane wave, which is exponentially attenuated as it travels inwards, so that only about 10 μm of its length need to be considered. In the local PL limit, only this same cylinder will produce luminescence photons. It is clear that as long as only such a small portion of the sample is participating in the PL, we can consider this portion to be a part of a sphere, provided it is large enough for the curvature of the surface to be negligible over lengths of the order of the absorption length. In this geometry, the illuminated part of the sample (imagined as a sphere centred in the origin of the reference system and illuminated by a parallel beam along the z-axis) can be approximated by a small portion of a narrow right circular cone of aperture centred around the beam axis and having its vertex in the centre of the sphere. This frustum of cone, for sphere radius much larger than the cylinder section, becomes indistinguishable from the actually illuminated cylinder we are considering. If the circular spot illuminated on the sample surface has area , the aperture of the frustum of cone will be given by the relation . We remark that, for the sizes, wavelengths and optical properties we are considering, a sphere of radius of the order of 1 mm is already quite large enough. Since the outgoing light is collected at distances of the order of 15 cm, we can still quite safely consider it to be in the far field limit. The power absorbed by a unit volume within the illuminated cylinder (or frustum of cone, defined by ), can be thus written as
(A.25)
a being the radius of our "large" sphere, r0 the radial coordinate, the absorption coefficient at incident wavelength and the transmittance for normal incidence at a plane boundary. is assumed to be identically zero outside of the illuminated cylinder (which we will approximate by the frustum of cone in the integration). In essence, we just use geometric optics for the absorption, which is fully justified for the experimental conditions described above.
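A minimal numerical sketch of this geometric-optics ingredient follows; the irradiance, transmittance and absorption coefficient are placeholder values chosen only for illustration, not those of the residues discussed in the text.

import numpy as np

I0, T, alpha = 1.0, 0.96, 3e5                 # irradiance (arb. units), transmittance, absorption coefficient [1/m]
z = np.linspace(0.0, 10e-6, 400)              # depth below the illuminated surface [m]
p_abs = T * alpha * I0 * np.exp(-alpha * z)   # power absorbed per unit volume at depth z

# Integrating over depth recovers T*I0*(1 - exp(-alpha*z_max)), i.e. almost all of T*I0 here.
print(np.trapz(p_abs, z))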
The integral at the denominator of the right hand side of Eq. (A.24) can thus be written as
(A.26)
the approximate equality being due to approximating the illuminated cylinder (the integration domain) with the frustum of cone.
The total power absorbed by the sample is
(A.27)
the approximate equality sign again due to the approximation of the cylinder with the frustum of cone. This equation can be used to quantify the accuracy of this approximation, since the integral on the right hand side can be evaluated analytically to yield
which rapidly approaches its limiting value of for increasing a.
With the help of the previous equations, we can write
(A.28)
For every given couple of frequencies and , we numerically evaluated the above equation for increasing sphere radii a, until convergence was reached with an accuracy of three significant figures. This was achieved, in all cases considered in the example application presented in this work, for m. We remark that neither nor are present in the above equation, as would be expected on physical grounds.
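The convergence test just described follows a generic pattern, sketched below; compute_p is a hypothetical stand-in for the full evaluation of Eq. (A.28), not part of any actual code referenced in this work.

def converge(compute_p, radii, rel_tol=5e-4):
    # Increase the sphere radius until the result is stable to about three significant figures.
    prev = None
    for a in radii:
        cur = compute_p(a)
        if prev is not None and abs(cur - prev) <= rel_tol * abs(cur):
            return a, cur
        prev = cur
    raise RuntimeError("no convergence over the supplied radii")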
Substituting Eqs. (A.24) and (31) into Eq. (A.23) yields the desired expression for in terms of measurable and calculable quantities:
where we use Eq. (A.28) to evaluate .
LabelUnits(axis)
Use axis unit descriptions in a Plot?
Description:
This attribute controls the appearance of an annotated coordinate grid (drawn with the AST_GRID routine) by determining whether the descriptive labels drawn for each axis of a Plot should include a description of the units being used on the axis. It takes a separate value for each physical axis of a Plot so that, for instance, the setting "LabelUnits(2)=1" specifies that a unit description should be included in the label for the second axis.
If the LabelUnits value of a Plot axis is non-zero, a unit description will be included in the descriptive label for that axis, otherwise it will be omitted. The default behaviour is to include a unit description unless the current Frame of the Plot is a SkyFrame representing equatorial, ecliptic, galactic or supergalactic coordinates, in which case it is omitted.
Type:
Integer (boolean).
Applicability
Plot
All Plots have this attribute.
Notes:
• The text used for the unit description is obtained from the Plot's Unit(axis) attribute.
• If no axis is specified, (e.g. "LabelUnits" instead of "LabelUnits(2)"), then a "set" or "clear" operation will affect the attribute value of all the Plot axes, while a "get" or "test" operation will use just the LabelUnits(1) value.
• If the current Frame of the Plot is not a SkyFrame, but includes axes which were extracted from a SkyFrame, then the default behaviour is to include a unit description only for those axes which were not extracted from a SkyFrame.
# Documentation generator for q (.qd)
## Overview
qdoc is an API documentation generator tool for q/kdb+. The qdoc tool scans your source code for comment blocks right before the code blocks (functions, data, etc.) and generates Markdown documentation for you.
The documentation can be generated from either the q files on disk or an in-memory table of items.
## Basic usage
To simply document all the source code for a project, invoke the documentation generator with the path to the project directory. It will recursively go and find *.q and README.md files and document them.
.qd.doc[::] `:/path/to/project
You can also selectively document q file(s) and/or README(s)
.qd.doc[::] `:/path/to/qfile1`:/path/to/qfile2`:/path/to/README
Render Markdown (using mkdocs)
.qd.out.mkdocs.write[::] .qd.doc[::] `:/path/to/project
## Writing documentation
qdoc's purpose is to document the API of your q project/library. Add the documentation comments directly to your source code, generally placed immediately before the code being documented. Each qdoc comment must start with / or // followed by one of the qdoc tags. Each tag must be prepended with an @ symbol.
### Function/data
Use the qdoc tags to document the behavior of a function or a data (variable) item.
Here is a code snippet from an example q file that documents:
1. A function item .foo.add that takes two arguments x and y of type long and returns the sum as an integer
2. A data item .foo.PI that holds the value of pi
system "d .foo";
// @kind function
// @fileoverview Function returns the sum of two numbers as an integer
// @param x {long} First parameter
// @param y {long} Second parameter
// @return {int} Sum of the parameters
"i"$x + y }; // @kind data // @fileoverview Static value of pi PI: 3.14159; system "d ."; ### README qdoc supports documenting the behavior of a namespace or the whole project. Use Markdown files called */README.md to add namespace or project level description. #### Namespace description To add a description for a namespace you will need to use the following qdoc tags: • kind: Use the readme identifier to mark the beginning of the readme comment block for a namespace • name: Name of the namespace being documented followed by /README.md. E.g.: .foo/README.md • category: (Optional) By default category is extracted from the namespace. Use this to specify a non default category name. • subcategory: (Optional) By default subcategory is extracted from the namespace. Use this to specify a non default subcategory name. • end: Marks the end of a qdoc comment block. The documentation for the namespace, along with the tags specified above, can live in a couple of places: • Embedded: You can embed readme(s) within a q file by classifying the comment block as readme using the kind tag. Multiple readmes for different namespaces can be included within a single q file by starting and ending each readme comment block with kind and end tags respectively. Since the embedded readmes are part of your q file you will need to comment out each line. These comments will be stripped out for you by the qdoc generator. Here is an example q file that has a couple of embedded readmes for different namespaces followed by arbitrary code: system "d .foo" // @kind readme // @name .foo/README.md // @category Foo // # Foo // This a readme for the .foo namespace. // It contains the following items: // - .foo.ADD // - .foo.PI // // end // @kind readme // @name .foo.bar/README.md // @category Foo // @subcategory Bar // # Foo // This a readme for the .foo.bar namespace. // // end // Followed by code ... 1+1; system "d ." Note: This document is generated by qdoc generator itself, so we can not use the '@end' tag in our examples since that will prematurely end the readme. All the lines with '// end' should be '// @end' instead • Standalone: You can also have a Markdown file per namespace containing the documentation. Just ensure that the name of the Markdown file ends with REAMDE.md so that the qdoc generator can locate the files. Even with the standalone READMEs, you do need to add the qdoc tags specified above. However, since these are raw Markdown files and would not contain any q code, you do not need to comment out every single line of documentation. Here is an example of foo/README.md file containing documentation for the .foo namespace // @kind readme // @name .foo/README.md // @category foo // # Foo This a readme for the .foo namespace. It contains the following items - .foo.ADD - .foo.PI #### Project description To add a project level description include a README.md file with the project description at the root of the project. The project level readmes do not need any qdoc comments. Contents of this readme file will appear on the index page of the project documentation rendered using the mkdocs plugin. ### Project organization project/ ├── foo.q ├── foo │ └── README.md ├── bar.q ├── README.md └── sub-project ├── sub.bar.q └── sub.foo.q ## QDoc output The qdoc generator converts the qdoc comments blocks in the source code to Markdown documentation. The core .qd.doc function takes two arguments, settings and src, and returns the generated Markdown content. 
It also writes the Markdown files to disk if the write flag is set in the settings dictionary.

### On disk

By default, the generated Markdown files are written to disk at ${pwd}/out. This can be overwritten by using the out attribute in the settings dictionary.
The out directory contains md subdirectory that holds a Markdown file per category/subcategory depending on the value of the group attribute in the settings. The files are named after the category/subcategory and each file contains the associated items (functions, data, readmes). By default, the category and subcategory information is extracted from the item name. The top-level namespace becomes the category and sub-namespace (if any) becomes the subcategory. Here are a few examples:
item name category subcategory
.foo.bar foo -
.foo.bar.baz foo bar
.foo.bar.baz.bat foo bar.baz
foo.bar Global -
foo.bar.baz Global -
This behavior can be modified and users can categorize their items on a per-namespace and/or per-item basis. Here is the order of precedence for determining the category and subcategory for an item:
• extracted from the item name
• defined on the namespace level using the default-category and default-subcategory tags in the onLoad function
• defined on per item basis using the category and subcategory tags
See qdoc tags for more information on the usage of these tags.
### In memory
The contents of the generated Markdown files are also returned by .qd.doc function along with the errors encountered during the generation.
See .qd.doc for detailed information on the returned data structure.
## QDoc tags
qdoc uses case insensitive tags to document the behavior of an item. Unsupported tags are ignored by the qdoc generator. The qdoc tags grammar is divided into two major categories:
• General Tags: These tags are used to document the behavior of a particular piece of code. All these tags are supposed to go immediately before the code being documented. Here is the list of all the general tags:
tags description multiline
@author Defines the author of the item Yes
@category Categorize an item in the generated Markdown files No
@deprecated Marks the item as deprecated Yes
@desc Describes the individual properties table/dictionary Yes
@doctest Describes executable example blocks Yes
@end Marks the end of a readme comment block N/A
@example Documents usage example(s) for the item Yes
@fileoverview Describes the behavior of the item Yes
@kind Defines the kind (type) of the item No
@name Defines the name of the item No
@param Describes the parameter of the function Yes
@private Marks the item as private N/A
@returns Describes the return value of the function Yes
@see Creates a link to another item in the documentation No
@subcategory Categorizes the item in the generated Markdown files No
@throws Documents an error that the function might throw Yes
@todo Documents the tasks that still need to be done Yes
• Special Tags: qdoc supports special tags that define/modifies the properties of code blocks on a namespace basis. These tags go inside the function body of a special function named onLoad (case sensitive). You can define an onLoad function for each namespace. E.g. .foo.onLoad, .foo.bar.onLoad. Here is the list of all the special tags:
tags description multiline
@default-category Defines the default category for all items in a namespace No
@default-subcategory Defines the default subcategory for all items in a namespace No
@typedef Describes the definition of a type in the onLoad function Yes
### Tag mapping
qdoc supports mapping the behavior of a default qdoc tag to a user-defined tag name. This allows users to make use of qdoc generator on an existing project (with user-defined tag names) without having to update the tags.
Example: Map the user-defined overview and arg tag names onto qdoc's fileoverview and param tags respectively
.qd.doc[enlist[`tagmap]!enlist `overview`arg!`fileoverview`param] `:/path/to/project
See .qd.doc for more details.
Note: The behavior of mapping tags of incompatible syntaxes is undefined
### Author
Syntax
@author AUTHOR_NAME
Overview
The author tag lets you specify an author name for the item
Examples
Documents Alex as the author of the function
// @author Alex
{[]
}
### Category
Syntax
@category CATEGORY_NAME
Overview
The category tag lets you specify a category for the item. This category along with the subcategory is used to organize items in the generated Markdown files. Items with the same category are grouped in the same Markdown file. By default, the category is extracted from the item name.
See qdoc output for information on default category
Examples
Set the category of the function to Foo
// @category Foo
{[]
}
### Default-category
Syntax
@default-category NAMESPACE CATEGORY_NAME
Overview
The default-category tag lets you specify a category for all the items in a namespace. Since it is one of the special qdoc tags, it should only be used within a function named onLoad.
See the special tags section under qdoc tags for more information on the onLoad function.
Examples
Set the category of all the items under the .foo namespace to Foo
.foo.onLoad: {[]
// @default-category .foo Foo
}
### Default-subcategory
Syntax
@default-subcategory NAMESPACE SUBCATEGORY_NAME
Overview
The default-subcategory tag lets you specify a subcategory for all the items in a namespace. Since it is one of the special qdoc tags, it should only be used within a function named onLoad.
See the special tags section under qdoc tags for more information on the onLoad function.
Examples
Set the subcategory of all the items under the .foo.bar namespace to Bar
.foo.bar.onLoad: {[]
// @default-subcategory .foo.bar Bar
}
### Deprecated
Syntax
@deprecated [ALTERNATIVE]
Overview
The deprecated tag marks an item as deprecated. You can use the deprecated tag by itself or include some text that describes more about the deprecation and provides an alternative.
Examples
Documents the function as deprecated
// @deprecated Use bar
{[]
}
### Desc
Syntax
@desc PARAM_NAME.PROPERTY_NAME PROPERTY_DESCRIPTION
Overview
The desc tag is used in conjunction with the param, return and typedef tags to add descriptions for table column or dictionary keys.
A dictionary key/table column can be marked optional by wrapping the name in square brackets.
Examples
Documents the description for the keys of a dictionary parameter. Note that key z is marked as optional here
// @param foo {dict (x: int; y: char; z: long)} Parameter description
// @desc foo.x Description of key x
// @desc foo.y Description of key y
// @desc [foo.z] Description of key z
{[foo]
}
Documents the description for the columns of a table
// @returns foo {table (x: int; y: char)} Return description
// @desc foo.x Description of column x
// @desc foo.y Description of column y
{[]
}
Documents the description for a data structure defined using the typedef tag
.foo.onLoad: {[]
// @typedef myType {dict (x: table (y: char); y: dict (x: char))}
// @desc myType.x Description for key x
// @desc myType.x.y Description for table column y in key x
// @desc myType.y Description for key y
// @desc myType.y.x Description for table column x in key y
}
### Doctest
Syntax
@doctest [TITLE]
code
code
Overview
The doctest tag is used to define an executable example that can be conditionally run during qdoc generation time for testing for API doc accuracy. The last line of the code block should return a boolean
Examples
// @doctest Testing the functionality of add function
// x:1;
// y:2;
// 3 ~ +[x;y]
### End
Syntax
@end
Overview
The end tag marks the end of the readme comment block. This tag also allows you to embed multiple readmes in a single q file without any ambiguity.
Examples
Marks the end of a readme comment block. Comments after the end tag are not part of the readme.
// @kind readme
// .
// .
// end
// a part of the readme
// .
// .
// end
Note: This document is generated by qdoc generator itself, so we can not use the '@end' tag in our examples since that will prematurely end the readme.
All the lines with '// end' should be '// @end' instead
### Example
Syntax
@example [TITLE]
code
code
Overview
The example tag assumes a multi-line comment (until the next tag or the beginning of the function) that shows how the item is intended to be used.
Examples should be titled to convey the purpose of the example. For a longer description, use the paragraph Markdown token > inside of the example block. The output of an example should be shown below the q expression line and should be prefixed with /=>
Examples
Document an example titled 'bar' for a function.
// @example Bar
// // This is a long description of how the example should work.
// // Long descriptions can be multiple lines
//
// foo `AAPL`GOOG`MSFT`AMZN
// /=> "AAPL"
// /=> "GOOG"
// /=> "MSFT"
// /=> "AMZN"
{[]
: .axq.asString x
}
### Fileoverview
Syntax
@fileOverview ITEM_DESCRIPTION
Overview
The fileOverview tag takes the multiline comments (until the next tag or the beginning of the function) as the overview for the item. A file overview block should describe the external behavior of an individual item. The file overview should not describe the inner workings of an item.
Examples
Document an overview for a foo function.
// @fileOverview This is the description
// for the function called 'foo'
{[]
}
### Kind
Syntax
@kind ITEM_TYPE
Overview
The kind tag specifies the type of the item. Valid item types are: function, data and readme. The kind tag also marks the beginning of the readme comment block, therefore it should be the first tag when used in a readme comment block.
Examples
Marks the item as a function
// @kind function
{[]
}
### Name
Syntax
@name ITEM_NAME
Overview
The name tag is used to specify the name of the item being documented. There is no need to use the name tag when documenting function/data since these items already have a name associated with them in code. README(s) however do not have a name associated with them, therefore, it is required to use this tag to specify the name for the readme.
The name is also used to categorize an item in the Markdown files. See category and subcategory for more information on how category and subcategory is extracted from an item name.
Examples
The value .foo/README.md for the name tag here specifies that this readme is associated with the .foo namespace
// @kind readme
// @name .foo/README.md
### Param
Syntax
@param PARAM_NAME {PARAM_TYPE} [PARAM_DESCRIPTION]
Overview
The param tag lets you document the parameter for a function you are documenting. It requires you to specify the name of the parameter at the least. You can include the parameter's type enclosed in curly brackets followed by a description. Both type and description fields are optional, but it is highly recommended to document them. For information on documenting parameter types, see qdoc types
Examples
Documents name, type, and description of function parameters
// @param bar {string} This is the description of parameter bar
// @param baz {int} Description of parameter baz
{[bar; baz]
}
Documents compound parameter types for a function
// @param bar {(symbol;long; byte)} This is the description of parameter bar
// @param baz {table (a:long; b:char)} Description of parameter baz
{[bar; baz]
}
### Private
Syntax
@private
Overview
The private tag marks an item as private, or not meant for general/public use. Private items are not documented by default, but the behavior can be changed by setting the docprivate flag in the settings.
As a general convention, a module should never call out to another module's private items.
Examples
Documents a function as a private function
// @private
{[]
}
### Returns
Syntax
@returns [RETURN_NAME] {RETURN_TYPE} [RETURN_DESCRIPTION]
Overview
The returns lets you document the return value for a function. Optionally, you can specify a name and a description for the return value.
Examples
Documents the return value of a function
// @returns {table (a: int; b: symbol)} Returns a table
{[]
: ([] a: til 10; b: 10?3)
}
### See
Syntax
@see ITEM_PATH [ITEM_PATH ...]
Overview
The see tag allows you to refer to another item that may be related to the one being documented. You can use the see tag more than once in a single item. Multiple items can also be referenced using a single see tag followed by a list of name-paths separated by spaces.
You need to provide a name-path of the other item. You can either use an absolute path (fully qualified name) or relative path (name) of the item being referred to. The relative path should be relative to the current module not the top level module.
Examples
Adds link to functions .bat.quaz and .foo.baz in function .foo.bar. It uses a fully qualified name to reference function .bat.quaz since it belongs to a different module than .foo.bar. The function .foo.baz however, can be referenced with a relative name baz since it is in the same module.
Function name: .foo.bar
// @see .bat.quaz
// @see baz
{[]
}
// @see .bat.quaz baz
{[]
}
### Subcategory
Syntax
@subcategory SUBCATEGORY_NAME
Overview
The subcategory tag lets you specify a subcategory for the item. This subcategory along with the category is used to organize items in the generated Markdown files. Items with the same subcategory are grouped in the same Markdown file. By default, the subcategory is extracted from the item name.
See qdoc output for information on default subcategory
Examples
Sets the subcategory to bar
// @subcategory bar
{[]
}
### Throws
Syntax
@throws {THROW_TYPE} [THROW_DESCRIPTION]
Overview
The throws tag allows you to document an error that the function might throw.
Examples
Documents that the function throws an "Invalid Argument Type" error
// @throws "Invalid Argument Type"
{[]
if[1b;
'"Invalid Argument Type"];
}
### Todo
Syntax
@todo TODO_DESCRIPTION
Overview
The todo tag allows you to document tasks that need to be completed. You can use the todo tag more than once in a single item.
Examples
Documents the tasks that need to be done
// @todo This task needs to be done
// @todo This task also needs to be completed
{[]
}
### Typedef
Syntax
Definition:
// @typedef ALIAS_NAME {ALIAS_TYPE} [TYPEDEF_DESCRIPTION]
Usage:
// @param PARAM_NAME {#ALIAS_NAME} [PARAM_DESCRIPTION]
Overview
The typedef tag allows you to describe an alias for your data structure and then use the alias as type everywhere instead of the data structure. You can optionally use the desc tag to define the keys/columns of the datastructure.
A private data structure can be created using typedef:private tag. Private data structures are not documented by default, but the behavior can be changed by setting the docprivate flag in the settings.
Since it is one of the special qdoc tags, it is supposed to be used in the onLoad function body only. See the special tags section under qdoc tags for more information on the onLoad function.
qdoc typedefs are categorized into two types:
1. Basic typedefs: Typedefs that accept no arguments
2. Advanced typedefs: Typedefs that accept argument(s). Wrap the arguments to the advanced typedef using angle brackets (< and >) if they are nested.
Examples
Describes basic typedefs in the onLoad function
Typedef definition:
.foo.onLoad: {[]
// @typedef firstType {int} This is my first type.
// @typedef:private secondType {dict (a: int; b: char)} This is my second type.
// @desc secondType.a Description for key a
// @desc secondType.b Description for key b
}
Typedef usage: (any function in the .foo namespace)
// @param newParam1 {#firstType} This is new param 1.
// @param newParam2 {#secondType} This is new param 2.
.foo.bar: {[newParam1; newParam2]
}
QDoc Output:
| Name | Type | Description |
| --------- | --------- | --------------------------------------------- |
|newParam1 |int |This is new param 1. This is my first type. |
|newParam2 |dict |This is new param 2. This is my second type. |
|newParam2.a|int |Description for key a |
|newParam2.b|char |Description for key b |
Typedef definition:
.foo.onLoad: {[]
// @typedef thirdType a {table (a: #a; b: char)}
// @typedef fourthType a b {(#a; #b)}
}
Typedef usage: (any function under the .foo module)
// @param newParam3 {#thirdType long} This is my new param 3
// @param newParam4 {#fourthType int char[]} This is my new param 4
// @param newParam5 {#fourthType <#thirdType double> <#thirdType int>}
.foo.bar: {[newParam3; newParam4; newParam5]
}
QDoc Output:
| Name | Type | Description |
| --------- | ----------------- | ------------------------------------- |
|newParam3 |table | This is my new param 3 |
|newParam3.a|long | |
|newParam3.b|char | |
|newParam4 |(int; char[]) | This is my new param 4 |
|newParam5 |(table (a: double; b: char); table (a: int; b: char)) ||
Describes typedefs with scope referencing
Typedef definition:
.foo.bar.onLoad: {[]
// @typedef myType {table (moduleName: symbol; typedef: string)}
// @typedef secondType {symbol} This is a new second type.
}
Typedef usage: (any function under the .foo namespace)
// @param newParam5 {#.foo.bar.myType} Fully qualified name naming.
// @param newParam6 {#bar.secondType} Relative naming.
.foo.bar: {[newParam5; newParam6]
}
QDoc Output:
| Name | Type | Description |
| ----------------------| ----- | ----------------------------------------- |
|newParam5 |table | Fully qualified name naming. |
|newParam5.moduleName |symbol | |
|newParam5.typedef |string | |
|newParam6 |symbol | Relative naming. This is a new second type|
## QDoc types
The following table outlines a brief overview of the supported qdoc types:
type input example documentation documentation example
atom 1 Atom type name long
anything (1i; 100; "a") Global typedef any #any
dict ab!(1; "x") Key value type specifier dict (a:long; b:char)
enum (ab)?(abba) Enumeration symbol $symbol
function {"i"$x} Function param & return fn (long) -> int
hsym :path/to/file Global typedef hsym #hsym
option/or One type or the other Type 1 | Type 2 | Type N table | dict [] | dict
string "Hello, World" Term string string
table ([] a:1 2;b:"xy") Key value type specifier table (a:long; b:char)
tuple (a; 0; 0x61) Tuple type specifier (symbol; long; byte)
typedef Could be anything Typedef reference #myTypedef
vector 1 2 3 4 5 ... Vector type and [] long[]
To validate a qdoc type, you can pass the qdoc type to the function .qd.types.parse as string. The type has valid syntax as long as the function returns a parse tree and does not throw any error.
.qd.types.parse "(symbol; long)"
//=> option ,(tuple;((option;,(atom;((atom;"symbol");(array;0))));(option;,(atom;((atom;"long");(array;0))));(array;0)))
.qd.types.parse "innt"
//=> Error parsing at line 1, column 0: expected "dict", "fn", "any", "bool", "guid", "byte", "long", "real", "char", "date", "time", "null", "type", "int", "*", "#", "{", "$", "("
The variable .qd.TYPES holds a list of all the supported qdoc types. These types can be widely catgeorized in three categories:
### Primitives
qdoc atomic types include q's primitive atoms and vectors ranging from type ± 1h to 19h.
To build a vector, append square brackets at the end of all types except the function and typedef types:
• Array of function: fn [] (long) -> int
• Array of typedef with argument: #myTypedef [] <arg1> <arg2>
// List of supported atoms
`bool`boolean`byte`char`character`date`datetime`float`guid`int`integer`long`minute`month`real`second`short`string`symbol`time`timespan`timestamp
// List of supported vectors
`bool[]`boolean[]`byte[]`char[]`character[]`date[]`datetime[]`float[]`guid[]`int[]`integer[]`long[]`minute[]`month[]`real[]`second[]`short[]`string[]`symbol[]`time[]`timespan[]`timestamp[]
### Compound
qdoc compound types include tuple, table, dictionary, and function, which corresponds to q's type 0, 98, 99 and 100h respectively.
#### Tuple
A tuple/general list is represented by a list of qdoc types wrapped in parens and separated by semicolons.
(QDOC_TYPE; QDOC_TYPE ...)
Examples:
// An empty tuple/general list
()
// A tuple of any type
(#any)
// A tuple of int and char array
(int; char[])
// A tuple of a dictionary, float, symbol and a lambda
(dict (x: long; y: symbol); float; symbol; fn)
// An array of tuple
(guid; symbol)[]
#### Table
A table is represented by the table keyword followed by key-value pairs separated by colons.
Simple Table:
table (COLUMN_NAME: COLUMN_TYPE; COLUMN_NAME: COLUMN_TYPE ...)
Keyed Table:
table ([COLUMN_NAME: COLUMN_TYPE] COLUMN_NAME: COLUMN_TYPE ...)
Examples:
// A table with no column definition
table
// A table with column foo of type int
table (foo: int)
// A table with two columns (square brackets are not necessary here)
table ([] foo: long; bar: fn)
// A table and a dictionary within a table
table ( foo: table (x: int); bar: dict (y: char))
// A keyed table with foo as the key column of type symbol
table ([foo: symbol] bar: int )
// An array of table
table (foo: long; bar: fn)[]
#### Dictionary
A dictionary is represented by the dictionary or dict keyword followed by key-value pairs separated by colons.
dictionary (KEY_NAME: VALUE_TYPE; KEY_NAME: VALUE_TYPE ...)
Examples:
// A dictionary with no definition
dictionary
// A dictionary with key foo of type int
dict (foo: int)
// A dict with two keys
dict (foo: long; bar: fn)
// A table and a dictionary within a dictionary
dict ( foo: table (x: int); bar: dict (y: char))
// An array of dictionary
dict (foo: long; bar: fn)[]
#### Function
A function is represented by the function or fn keyword followed by function parameter and return types. Only monadic functions can be documented currently.
function (PARAM_1_TYPE; PARAM_2_TYPE; ...; PARAM_N_TYPE) -> RETURN_TYPE
Examples:
// A function with no definition
function
// A function that takes an integer and returns a long
fn (int) -> long
// A function that takes a dictionary and returns a table
fn (dict(x: symbol)) -> table(x: symbol)
// A function that takes three arguments and returns a guid
fn (long; symbol; char) -> guid
// An array of functions
fn[] (int) -> long[]
### Special
qdoc supports a few special types:
type description
* Represents any type
null Represents any null type
number Represents any numeric type
string Represent a char vector
#hsym Global typedef for file paths
#any Global typedef for any type
## .qd.TAGS
Returns a symbolic list of valid QDoc tags.
## .qd.TYPES
Returns a symbolic list of valid QDoc types.
## .qd.doc
Generates markdown documentation from given q files on disk or a table of items. Default output location for the generated markdown files is ./out/md directory.
Parameters:
Name Type Description
settings .qd.settings Optional Argument. Pass empty dictionary to use default settings
src .qd.src Source of the data to be documented. You can document using q files on disk or a table of items
Returns:
Type Description
.qd.output A dictionary of path to the output directory (MD files), markdown content in each file and the table of errors
Example: Generate qdoc from all the files (\*.q, \*.README and \*.README.md) in the current directory with different settings.
// Default settings
.qd.doc[::] .
// Write the generated markdown files to the specified directory
.qd.doc[(enlist `out)!enlist `:mydir] .
// Map user defined description and arg tags to qdoc's fileoverview and param tags respectively
.qd.doc[(enlist `tagmap)!enlist `description`arg!`fileoverview`param] .
// Exclude items whose name matches the *.i.* expression
// Option 1: By default it will use like
.qd.doc[(enlist `exclude)!enlist "*.i.*"] .
// Option 2: Use like expression
.qd.doc[(enlist `exclude)!enlist (`like;"*.i.*")] .
// Option 3: Use regex expression
.qd.doc[(enlist `exclude)!enlist (`regex;".*\\.i\\..*")] .
// Document items marked as private or have todo tags or have no qdoc tags
.qd.doc[`docprivate`doctodo`docempty!111b] .
// Do not search for files recursively
.qd.doc[(enlist `recursive)!enlist 0b] .
// Do not write generated markdown files to disk
.qd.doc[(enlist `write)!enlist 0b] .
Example: Generate qdoc from files (\*.q, \*.README and \*.README.md) on disk with default settings
// Document all the files in the current directory
.qd.doc[::] .
// Document all the files from the specified directory
.qd.doc[::] $"/dir/to/files" // Document only the specified files .qd.doc[::] :dir/foo.q:foo.q:dir/foo.README.md Example: Generate qdoc from a table of items items: ([] name: .foo.bar.foo.baz.foo.bat.bar.foo.bat.BAZ,$(".foo/README.md"; ".foo.bat/README.md");
kind: functionfunctionfunctiondatafilefile;
content: (
"// @fileoverview Description for .foo.bar\n// @category FOO\n// @subcategory\n// @param x {int}\n// @return {long[]}\n{[x]\n til x\n}";
"// @fileoverview Description for .foo.baz\n// @category FOO\n// @subcategory\n// @param x {int}\n// @return {int[]}\n{[x]\n \"i\"$til x\n}"; "// @fileoverview Description for .foo.bat.bar\n// @category FOO\n// @subcategory BAT\n// @param x {int}\n// @return {byte[]}\n{[x]\n \"x\"$til x\n}";
"// @fileoverview Description for .foo.bat.BAZ\n// @category FOO\n// @subcategory BAT\nrand 100";
"// @kind readme\n// @category FOO\n// @subcategory \n# Foo Module\nThis is a readme for .foo module\n ...";
"// @kind readme\n// @category FOO\n// @subcategory BAT\n# Foo.bat Module\nThis is a readme for .foo.bat module\n ..."
)
);
.qd.doc[::] items
## .qd.docws
Deprecated
This function has been deprecated. You can now right click a repository/module/file in the workspace to generate doc. You can also use .qd.doc to generate doc manually.
Parameter:
Name Type Description
x .any Any kdb+ type
Throws:
Type Description
# How do you find the antiderivative of [e^(2x)/(4+e^(4x))]?
Aug 3, 2016
$\frac{1}{4} \arctan \left({e}^{2 x} / 2\right) + C$.
#### Explanation:
Let, $I = \int {e}^{2 x} / \left(4 + {e}^{4 x}\right) \mathrm{dx}$.
We take subst. ${e}^{2 x} = t \Rightarrow {e}^{2 x} \cdot 2 \mathrm{dx} = \mathrm{dt}$.
Therefore, $I = \frac{1}{2} \int \frac{2 \cdot {e}^{2 x} \cdot \mathrm{dx}}{4 + {\left({e}^{2 x}\right)}^{2}}$,
$= \frac{1}{2} \int \frac{\mathrm{dt}}{4 + {t}^{2}} = \frac{1}{2} \cdot \frac{1}{2} \cdot \arctan \left(\frac{t}{2}\right)$.
Hence, $I = \frac{1}{4} \arctan \left({e}^{2 x} / 2\right) + C$.
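We can check by differentiating: with $u = {e}^{2 x} / 2$, we have $u ' = {e}^{2 x}$ and $\frac{d}{\mathrm{dx}} \arctan u = \frac{u '}{1 + {u}^{2}}$, so that
$\frac{d}{\mathrm{dx}} \left[\frac{1}{4} \arctan \left({e}^{2 x} / 2\right)\right] = \frac{1}{4} \cdot \frac{{e}^{2 x}}{1 + {e}^{4 x} / 4} = \frac{{e}^{2 x}}{4 + {e}^{4 x}}$,
which is the original integrand.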
## Friday, August 31, 2012
### Publishing Your Dissertation With LaTeX
I published my doctoral dissertation. As I used LaTeX to write it in the first place the publisher agreed to let me do the typesetting myself. This way I believed I would avoid a load of extra work converting it all to MS Word and all the annoyances that come along with it. While I am happy with the result in terms of typesetting quality, doing it all in LaTeX turned out to be quite a messy process that took me almost a year.
### 1. Fundamentals: jurabib hacking
Most of the complexity was hidden in subtle problems I had not foreseen. The main obstacle that consumed most of my time was caused by a decision I made years ago, when I decided to use the jurabib package to manage my references. By the time I was ready to publish the dissertation the package was not actively supported anymore. Instead there was biblatex, which looked very promising but did lack one feature I could not live without (support for archival records is essential for historians). I ended up adapting the official jurabib package to my needs (which were not fully covered by the original package), which took me at least 3 months —working in my spare time after my full-time job. Before that I had only been a user of LaTeX. While I felt comfortable using the higher level API, I had to learn La/TeX from scratch in order to do some more serious stuff. I got there mostly by reading parts of Knuth's TexBook and getting hints from the Internets. What I found out about jurabib was not very encouraging. Some implementation details seemed like a hack to me once I started understanding what was going on. My modifications did not help in any way making it look nicer or more robust. I just tried to coerce it somehow to follow my will. The result is probably not worth sharing but it did what I needed it to do.
### 2. XeLaTeX or LaTex?
I first jumped on to the XeLaTeX train, which seemed to make my life easier because I was able to use the OpenType fonts installed on my system. At the same time I had to convert everything into UTF-8 because for some unknown reason I could not get the
\XeTeXinputencoding
switch to work. But the showstopper was that the quality of the typesetting simply did not match the printed books I used as references. The main reason for that is that XeLaTeX does not yet support the microtypography features that are available in LaTeX through the microtype package. I reverted back to LaTeX and used fontools' autoinst, which is basically a wrapper around LCDF TypeTools. It turned out to be not hard at all to use my system's OpenType fonts that way.
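In case it helps anyone going the same way: the microtype side of this really is a one-liner in the preamble,
\usepackage{microtype}
while autoinst drives the LCDF tools to generate the metrics plus a small style file per font family, which you then load like any other package. The exact invocation depends on your fonts, so I won't pretend the line above is my complete setup.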
### 3. Adaptation to my publisher's requirements
My only guidance as to what the result should look like was an example of another book published in the same series and some general information from my publisher. I used that and Mac OS X's Preview program to get a feel for sizes and proportions of the intended result and used What The Font to identify the fonts. It felt weird figuring out the layout that way. I think this awkwardness was due to the publisher's workflow clearly being geared towards a MS Word centric approach, where all the typesetting happens at the publishing house. I had used KOMAscript to typeset the initial version of the text, which I submitted to my university. This turned out to be the next roadblock. There was a stark dissonance between what my publisher demanded and what the author of the KOMA templates deemed acceptable in an effort to educate his users towards his understanding of typesetting. Again I was reluctant to abandon the KOMA packages completely as they also offered much valued functionality in other areas.
My main difficulty was font sizes where I had to do nasty things like this:
\KOMAoption{fontsize}{10.5pt}
\makeatletter %hacking to 11pt
\def\normalsize{%
  \@setfontsize\normalsize\@xipt{12.75}%
  \abovedisplayskip 11\p@ \@plus3\p@ \@minus6\p@
  \abovedisplayshortskip \z@ \@plus3\p@
  \belowdisplayshortskip 6.5\p@ \@plus3.5\p@ \@minus3\p@
  \belowdisplayskip \abovedisplayskip
  \let\@listi\@listI}
\makeatother
I just did not find another solution. If I had had the time, the clean solution might have been to write my own template based on one of the default LaTeX templates instead.
### 4. Indices
Just one thing: Make sure they are ordered correctly even if you have got words/titles with umlauts in them ...
### Conclusion
What does this all mean? Don't do LaTeX? Not at all, quite the opposite! Do it! Apart from the jurabib issue which was really painful, the other points mentioned here were lessons to be learned rather than unsurmountable obstacles. So by all means do it, maybe choose your packages wisely. Using TeX saved me a lot of tedious manual work while preparing indices and managing references and it also gave me professional grade typesetting on top of that.
# Minimal representation of an AND with two-input NOR
Let $$x_1,x_2,x_3,x_4$$ be boolean variables (i.e $$x_i \in \{0,1\}$$)
Consider $$f(x_1,x_2,x_3,x_4) = x_1 \wedge x_2 \wedge x_3 \wedge x_4$$
I want to write $$f$$ in terms of two-input NOR gates. I.e, $$\mbox{NR}(x,y)= \neg ( x \vee y )$$.
What is the minimum number of NOR gates to write it?
I found that it can be done with 9. Can it be done with less.
Here is more detail on how to do that. Represent a function by its truth table (i.e., a 16-bit vector). To find all functions representable by $$k$$ NOR gates, enumerate all functions $$f,g$$ representable by $$i,k-i$$ NOR gates, respectively, and then compute the function $$\neg (f \lor g)$$, for all $$0 \le i < k$$. Representing $$f,g$$ as a 16-bit bitvector, you can compute $$\neg (f \lor g)$$ in two instructions, then you add it to the set of achievable functions. If you represent that set of achievable functions as a bitvector of length $$2^{16}$$, adding a new function to the set can be done in a few instructions. So, each individual step will be very fast.
Overall, this computation should be feasible. In particular, this computation requires at most $$(k-1) \times 2^{16} \times 2^{16} \le 2^{35}$$ simple operations, and you are going to do this 8 times, for a total of at most $$2^{38}$$ simple operations. That should be doable in a few minutes or a few hours on a typical computer.
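For what it's worth, that enumeration is short to code up. The sketch below (Python/NumPy) treats the two operand sub-circuits as disjoint, i.e. it finds the best tree-shaped (formula) realisation, which is what the layered count described above measures; a search that allows shared sub-circuits would have to track whole sets of computed functions instead.

import numpy as np

MASK = 0xFFFF
inputs = [0xAAAA, 0xCCCC, 0xF0F0, 0xFF00]                 # 16-bit truth tables of x1..x4
target = inputs[0] & inputs[1] & inputs[2] & inputs[3]    # x1 AND x2 AND x3 AND x4

seen = np.zeros(1 << 16, dtype=bool)                      # bitvector of achievable functions
seen[inputs] = True
levels = [np.array(inputs, dtype=np.int64)]               # levels[c] = functions first built with c gates

k = 0
while not seen[target]:
    k += 1
    new_mask = np.zeros(1 << 16, dtype=bool)
    for i in range(k):                                    # operand circuits of cost i and k-1-i
        small, big = levels[i], levels[k - 1 - i]
        if len(small) > len(big):
            small, big = big, small
        for f in small:
            new_mask[~(f | big) & MASK] = True            # NOR of f with every g, vectorised
    new_mask &= ~seen                                     # keep only functions not reachable with fewer gates
    seen |= new_mask
    levels.append(np.flatnonzero(new_mask))

print(k)   # gate count this layered search needs for the 4-input AND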
Topic: Inconsistency of the usual axioms of set theory
Replies: 71 Last Post: Apr 11, 2009 9:24 PM
David C. Ullrich Posts: 3,522 Registered: 12/13/04
Re: Inconsistency of the usual axioms of set theory
Posted: Feb 19, 2009 7:47 AM
On Wed, 18 Feb 2009 07:42:58 -0800 (PST), Student of Math
<omar.hosseiny@gmail.com> wrote:
>On Feb 17, 2:29 pm, David C. Ullrich <dullr...@sprynet.com> wrote:
>> On Mon, 16 Feb 2009 02:58:04 -0800 (PST), Student of Math
>>
>>
>>
>>
>>
>> <omar.hosse...@gmail.com> wrote:
>> >On Feb 13, 1:27 pm, David C. Ullrich <dullr...@sprynet.com> wrote:
>> >> On Thu, 12 Feb 2009 19:48:04 -0800 (PST), lwal...@lausd.net wrote:
>> >> >On Feb 12, 11:27 am, Student of Math <omar.hosse...@gmail.com> wrote:
>> >> >> I was trying to find the answer for some problems like Goldbach
>> >> >> Conjecture and continuum problem, but I found that the axiomatic
>> >> >> system for which these problems are discussed in it (ZF), is
>> >> >> inconsistent.
>>
>> >> >Let me warn you that those who attempt to prove ZF
>> >> >inconsistent are usually labeled "cranks," "trolls,"
>> >> >and much worse names.
>>
>> >> There's an important distinction that you _insist_ on
>> >> missing, no matter how many times it's pointed out
>> >> to you. The following are very different:
>>
>> >> (i) Attempting to prove ZF inconsistent.
>>
>> >> (ii) Stating that one _has_ proved ZF inconsistent,
>> >> and then ignoring or misunderstanding simple
>> >> explanations of why the proof is simply wrong.
>>
>> >> >Also, threads discussing the
>> >> >consistency of ZF end up being be hundreds of posts
>> >> >in length. Indeed, I'm surprised that this thread
>> >> >doesn't already 100 posts...
>>
>> >> >> ----------------------------------------------
>> >> >> Inconsistency of ZF in first-order logic
>> >> >> Theorem. Suppose Zermelo-Fraenkel set theory (ZF) be a first-order
>> >> >> theory, then it is inconsistent.
>> >> >> Proof.
>> >> >> Whereas the axiom of foundation eliminates sets with an infinite
>> >> >> descending chain each a member of the next [1], we show that the
>> >> >> existence of those sets is postulated by axiom of infinity.
>> >> >> According to the axiom of infinity there exists a set $A$ such
>> >> >> that $\emptyset\in A$ and for all $x$, if $x\in A$, then
>> >> >> $x\cup\{x\}=S(x)\in A$ [1]. Note that $x\in S(x)$, let $B\subset >> >> >> A$ such that $x\in B$ if and only if there is no $y\in A$ with
>> >> >> $S(y)=x$. Assuming that for all given sets the descending chain
>> >> >> each a member of the next is finite, we may arrange our ideas as
>> >> >> follows
>> >> >> $A$ is a set such that $B\subset A$ and for all $x$, if $x\in A$,
>> >> >> then
>> >> >> (i) $S(x)\in A$, and
>> >> >> (ii)$x$ is obtained from an element of $B$ by a finite
>> >> >> step(s).
>> >> >> Where a "step" is to obtain $S(x)$ from $x$. This may be
>> >> >> written more precisely
>> >> >> $$A=\bigcup_{z\in B}{\Omega_{z}}$$
>> >> >> where $\Omega_{z}=\{z,S(z),S(S(z)),\ldots \}$. One may define
>> >> >> $\Omega_{z}$ as a set which satisfies both (i) and (ii), with
>> >> >> $B=\{z\}$.
>> >> >> It follows from the upward Lowenheim-Skolem theorem that it is
>> >> >> not possible to characterize finiteness in first-order logic [2],
>> >> >> hence the part(ii) of the conditions above is not first-order
>> >> >> expressible. Consequently for every first-order expressible set
>> >> >> which its existence is asserted by axiom of infinity (condition (i)),
>> >> >> the condition (ii) does not holds. This implies the existence a set
>> >> >> with an infinite descending chain each a member of the next, a
>>
>> >> >We observe that Hosseiny, like a few other so-called
>> >> >"cranks," is attempting to use Lowenheim-Skolem to
>> >> >prove ZF inconsistent. Those previous proof attempts
>> >> >used _downward_ L-S to construct a countable model
>> >> >of ZF, yet R is uncountable, to claim contradiction.
>>
>> >> >But Hosseiny's proof attempts to use _upward_ L-S to
>> >> >obtain a nonstandard model of N, which he calls A,
>> >> >so then if n is a nonstandard (infinite) natural:
>>
>> >> >n > n-1 > n-2 > n-3 > ...
>>
>> >> >is a infinitely descending chain, once again to
>> >> >claim contradiction and hence ZF inconsistent.
>>
>> >> >I know that the "standard" mathematicians (i.e., the
>> >> >ones who support ZF(C)) have attacked the downward
>> >> >L-S proof, and they'll likely invalidate Hosseiny's
>> >> >proof as well.
>>
>> >> >In the downward proof, it was pointed out that the
>> >> >model may be countable, but from the perspective of
>> >> >that model, R is still uncountable. Similarly, I
>> >> >suspect that the descending chain is infinite, but
>> >> >it is finite from the perspective of this model.
>>
>> >> >The defenders of ZF(C) will probably explain it
>> >> >more eloquently than I can. Soon, there'll be
>> >> >hundreds of posts all showing Hosseiny's why his
>> >> >proof won't be accepted as a proof of ~Con(ZF).
>>
>> >> Erm, the reason it won't be accepted is that it's
>> >> simply _wrong_. Is there some reason incorrect
>> >> proofs _should_ be accepted?
>>
>> >> There's at least one major misunderstanding
>> >> that jumps out:
>>
>> >> "It follows from the upward Lowenheim-Skolem theorem that it is
>> >> not possible to characterize finiteness in first-order logic [2],
>> >> hence the part(ii) of the conditions above is not first-order
>> >> expressible."
>>
>> >> The problem is slightly subtle. It's true, in a sense, that
>>
>> >> (a) LS shows that it's impossible to characterize finiteness in
>> >> first-order logic.
>>
>> >> It's certainly true that ZF is a first-order theory. Nonetheless
>> >> it's also true that
>>
>> >> (b) "S is finite" is expressible in the language of ZF.
>>
>> >> The reason that (a) and (b) do not contradict each other
>> >> has to do with the disctinction between "finite" and
>> >> "finite according to this model". Much like the explanation
>> >> must have a countable model, although ZF proves
>> >> that there are more than countably many sets.
>>
>> >> Hmm. Come to think of it, looking at what you wrote
>> >> you seem to have realized all by yourself that this is
>> >> a problem. So what's your point?
>>
>> >> David C. Ullrich
>>
>> >> "Understanding Godel isn't about following his formal proof.
>> >> That would make a mockery of everything Godel was up to."
>> >> in sci.logic.)- Hide quoted text -
>>
>> >> - Show quoted text -
>>
>> >It does not concern my argument that "finiteness is expressible in
>> >ZF",
>>
>> Huh? It certainly does matter. You said
>>
>> (ii)$x$ is obtained from an element of $B$ by a finite
>> step(s).
>>
>> and then in explaining why this supposedly led to a
>>
>> "It follows from the upward Lowenheim-Skolem theorem that it is
>> not possible to characterize finiteness in first-order logic [2],
>> hence the part(ii) of the conditions above is not first-order
>> expressible."
>>
>> Condition (ii) _is_ expressible in ZFC, hence your argument
>> is simply wrong.
>>
>> >as if ZF may be inconsistent then it is possible to express
>> >every thing in it.
>>
>> That's simply not so.
>>
>> >If you have proven that ZF is consistent.
>>
>> That's not a sentence, so I'm not sure what the appropriate
>> reply is. But I certainly have not proven that ZFC is
>> consistent - I never claimed to have done so, and
>> nothing I've said requires that I have done so.
>>
>> >As I said before, you have assumed a sentence S be true and the you
>> >discuss if it may be true or false.
>>
>> Yes, you've said this before. That doesn't make it true.
>>
>> David C. Ullrich
>>
>> "Understanding Godel isn't about following his formal proof.
>> That would make a mockery of everything Godel was up to."
>> in sci.logic.)- Hide quoted text -
>>
>> - Show quoted text -
>Hello,
>I can explain my ideas as more as you want.
You'd be better off trying to understand the explanations
of why your proof is wrong.
>Is it possible to express finiteness in ZF?
Yes.
>Yes, because N (the set of natural numbers) is expressible in ZF.
No, we could express finiteness even without N.
>Why the set N is expressible in ZF?
>Because of the axioms of ZF.
>
>the axiom of infinity states
>
>There exists a set A such that 0 is in A and for all x, if x is in A,
>then
>xU{x}=S(x) is in A.
>
>I showed that the set A which its exsitence is asserted by axiom of
>infinity, and
>it has no elements with an infinite descending chain each a member of
>the next,
>is of the form ($$A=\bigcup_{z\in B}{\Omega_{z}}$$), which is not
>first-order
>expressible. Hence, to establish ZF, you have to assume the existence
>a set
>which is not first-order expressible. This contradicts the assumption
>that ZF is
>a first-order theory.
>
>If you would like to have ZF as a first-order theory, then the set A
>which its
>existence is asserted by axiom of infinity has an infinite descending
>chain each
>a member of the next, a contradiction.
>
>Regards,
>Omar hosseiny
David C. Ullrich
"Understanding Godel isn't about following his formal proof.
That would make a mockery of everything Godel was up to."
in sci.logic.)
# ZkConfigManager filesystem separator fix for Windows
## Details
• Type: Bug
• Status: Closed
• Priority: Major
• Resolution: Fixed
• Affects Version/s: None
• Fix Version/s:
• Component/s: None
• Labels:
None
## Description
The separator for zk nodes is '/'. On Windows, however, the relative paths of files nested within a directory (e.g. velocity\hit-plain.vm) contain '\' while uploading. Apart from causing an inconsistency compared to zk on POSIX systems, this messed up ZkCLITest on Windows, where the count of files in zk was compared against the count of files on the filesystem.
## Attachments
1. SOLR-7158.patch (2 kB)
2. SOLR-7158.patch (2 kB) - Alan Woodward
## Activity
Ishan Chattopadhyaya added a comment -
Attaching a patch to fix this; should fix the jenkins failure for the ZkCLITest.testUpConfigLinkConfigClearZk().
Alan Woodward added a comment -
Thanks Ishan!
Ishan Chattopadhyaya added a comment -
Thanks for reviewing, Alan.
The first createDirectories() (on the top) creates the base directory, e.g. c:\users\ishan\configs.
The one I added creates the directories for nested files in the ZK configs, e.g. for a file in zk: velocity\plain-hit.vm it creates c:\users\ishan\configs\velocity.
I think the first one (on top) could be removed, since the latter one will recursively create the base directory as well (something like mkdir -p).
Alan Woodward added a comment -
Right, but downloadFromZk is called recursively for every subdirectory, so the createDirectories() at the top should be called for each directory that needs to be constructed. Unless I'm missing something?
Ishan Chattopadhyaya added a comment -
Ah, I see what you mean! Actually, because of the second issue with the path separator, something like velocity\plain-hit.vm was not being considered as a subpath, and thus the Files.write() was attempting to create a single file with that name (velocity\plain-hit.vm), which failed. Hence, I tried to explicitly create the sub-directory (velocity) by calling createDirectories(filename.getParent()), which in view of the path separator fix seems redundant. Thanks for the catch!
Alan Woodward added a comment -
Here's an alternative patch (with the Windows path shenanigans extracted to a helper method). Could you test this on a Windows box? Thanks!
Ishan Chattopadhyaya added a comment - edited
I like the path separator logic being extracted into a helper method. Just checked on Windows, it works fine!
A quick lookup at http://en.wikipedia.org/wiki/Path_%28computing%29#Representations_of_paths_by_operating_system_and_shell convinces me that we can actually hardcode "\" for the check for separator (as you've done here). That might also make it fine (more performant?) to replace Pattern.quote(separator) with "\\\\"?
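For readers without the attached patches, a minimal sketch of the kind of helper under discussion might look like the following. This is an assumption-laden illustration, not the committed SOLR-7158 code: the class name, method name and signature are invented for the example.

```java
import java.nio.file.FileSystems;
import java.nio.file.Path;

class ZkPathNames {
    // Hypothetical helper (not the committed patch): turn a file path relative
    // to the local config directory into a zk node name, which must use '/'.
    static String createZkNodeName(String zkRoot, Path rootDir, Path file) {
        String relativePath = rootDir.relativize(file).toString();
        // On Windows the default separator is '\', so normalise it to '/'.
        if ("\\".equals(FileSystems.getDefault().getSeparator())) {
            relativePath = relativePath.replace('\\', '/');
        }
        return zkRoot + "/" + relativePath;
    }
}
```

Using a plain character replace sidesteps the regex-escaping question (Pattern.quote(separator) versus "\\\\") raised in the comment above.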
Ishan Chattopadhyaya added a comment -
Just a thought, instead of createZkNode (which gives the impression that it actually creates a zk node), can we rename the helper method to getZkNodeName/createZkNodeName or something similar?
Alan Woodward added a comment -
I've incorporated both suggestions. Thanks again!
ASF subversion and git services added a comment -
Commit 1662205 from Alan Woodward in branch 'dev/trunk'
[ https://svn.apache.org/r1662205 ]
SOLR-7158: Fix zk upload on Windows systems
ASF subversion and git services added a comment -
Commit 1662206 from Alan Woodward in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1662206 ]
SOLR-7158: Fix zk upload on Windows systems
Timothy Potter added a comment -
Bulk close after 5.1 release
## People
• Assignee:
Alan Woodward
Reporter: |
# The difference between the EST of the succeeding activity and the EFT of the activity under consideration is called
This question was previously asked in
MPSC AE CE Mains 2018 Official (Paper 1)
1. Total float
2. Independent float
3. Interfering float
4. Free float
Option 4 :
Free float
## Detailed Solution
Explanation:
Early Start Time:
• The earliest point in the schedule at which a task can begin.
• EST = $$T_E^i$$
Early Finish Time:
• The earliest point in the schedule at which a task can finish.
• EFT = $$T_E^i + t_{ij}$$
Latest Start Time:
• The latest point in the schedule at which a task can start without causing a delay.
• LST = $$T_L^j - t_{ij}$$
Latest Finish Time:
• The latest point in the schedule at which a task can finish without causing a delay.
• LFT = $$T_L^j$$
Types of float:
1) Total float:
• It is the difference between the maximum time available and the actual time required for the completion of the activity.
• FT = LST - EST = LFT - EFT = $$T_L^j - T_E^i - t_e^{ij}$$
2) Free float:
• It is the amount of time by which an activity can be delayed without affecting the EST of the succeeding activity.
• FF = $$T_E^j - T_E^i - t_e^{ij}$$
3) Independent float:
• It is the excess of minimum available time over the required activity duration
• FI = $$T_E^j - T_L^i - t_e^{ij}$$
4) Interfering float
• It is the difference between the total float and the free float of an activity.
• FIN = FT - FF |
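A quick numerical check (illustrative values, not part of the original question): take $$T_E^i = 5$$, $$T_L^i = 7$$, $$T_E^j = 15$$, $$T_L^j = 18$$ and $$t_e^{ij} = 6$$. Then:
• Total float: FT = 18 - 5 - 6 = 7
• Free float: FF = 15 - 5 - 6 = 4
• Independent float: FI = 15 - 7 - 6 = 2
• Interfering float: FIN = FT - FF = 7 - 4 = 3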
# Definition:Derivative of Smooth Path/Real Cartesian Space
## Definition
Let $\R^n$ be a real cartesian space of $n$ dimensions.
Let $\left[{a \,.\,.\, b}\right]$ be a closed real interval.
Let $\rho: \left[{a \,.\,.\, b}\right] \to \R^n$ be a smooth path in $\R^n$.
For each $k \in \left\{ {1, 2, \ldots, n}\right\}$, define the real function $\rho_k: \left[{a \,.\,.\, b}\right] \to \R$ by:
$\forall t \in \left[{a \,.\,.\, b}\right]: \rho_k \left({t}\right) = \pr_k \left({\rho \left({t}\right)}\right)$
where $\pr_k$ denotes the $k$th projection from the image $\operatorname{Im} \left({\rho}\right)$ of $\rho$ to $\R$.
It follows from the definition of a smooth path that $\rho_k$ is continuously differentiable for all $k$.
Let $\rho_k' \left({t}\right)$ denote the derivative of $\rho_k$ with respect to $t$.
The derivative of $\rho$ is the continuous vector-valued function $\rho': \left[{a \,.\,.\, b}\right] \to \R^n$ defined by:
$\forall t \in \left[{a \,.\,.\, b}\right]: \rho' \left({t}\right) = \displaystyle \sum_{k \mathop = 1}^n \rho_k' \left({t}\right) \mathbf e_k$
where $\left({\mathbf e_1, \mathbf e_2, \ldots, \mathbf e_n}\right)$ denotes the standard ordered basis of $\R^n$. |
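For example (an added illustration, not part of the definition above): let $\rho: \left[{0 \,.\,.\, 2 \pi}\right] \to \R^2$ be the smooth path defined by $\rho \left({t}\right) = \left({\cos t, \sin t}\right)$. Then $\rho_1 \left({t}\right) = \cos t$ and $\rho_2 \left({t}\right) = \sin t$, so:
$\forall t \in \left[{0 \,.\,.\, 2 \pi}\right]: \rho' \left({t}\right) = -\sin t \, \mathbf e_1 + \cos t \, \mathbf e_2$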
How do you evaluate 28+6*4-2?
Jun 4, 2015
Sequence of arithmetic operations:
1. Anything inside parentheses
2. Exponentiation
3. Multiplication and Division, Left to Right
4. Addition and Subtraction, Left to Right
Given $28 + 6 \cdot 4 - 2$
There are no parentheses
Multiplication first:
$\textcolor{white}{\text{XXXX}}$$28 + 6 \cdot 4 - 2 = 28 + 24 - 2$
Then Addition and Subtraction, Left to Right:
$\textcolor{white}{\text{XXXX}}$$28 + 24 - 2 = 52 - 2$
$\textcolor{white}{\text{XXXX}}$$52 - 2 = 50$
$28 + 6 \cdot 4 - 2 = 50$ |
# Tensor Product of Hilbert spaces
This question is regarding a definition of Tensor product of Hilbert spaces that I found in Wald's book on QFT in curved space time. Let's first get some notation straight.
Let $(V,+,*)$ denote a set $V$, together with $+$ and $*$ being the addition and multiplication maps on $V$ that satisfy the vector space axioms. We define the complex conjugate multiplication ${\overline *}:{\mathbb C} \times V \to V$ as $$c {\overline *} \Psi = {\overline c} * \Psi,~~\forall~~\Psi \in V$$ The vector space formed by $(V,+,{\overline *})$ is called the complex conjugate vector space and is denoted by ${\overline V}$.
Given two Hilbert spaces ${\cal H}_1$ and ${\cal H}_2$ and a bounded linear map $A: {\cal H}_1 \to {\cal H}_2$, we define the adjoint of this map $A^\dagger: {\cal H}_2 \to {\cal H}_1$ as $$\left< \Psi_2, A \Psi_1 \right>_{{\cal H}_2} = \left< A^\dagger \Psi_2 , \Psi_1 \right>_{{\cal H}_1}$$ where $\left< ~, ~ \right>_{{\cal H}_1}$ is the inner product as defined on ${\cal H}_1$ (similarly for ${\cal H}_2$) and $\Psi_1 \in {\cal H}_1,~\Psi_2 \in {\cal H}_2$. That such map always exists can be proved using the Riesz lemma.
Here the word "bounded" simply means that there exists some $C \in {\mathbb R}$ such that $$\left\| A(\Psi_1) \right\|_{{\cal H}_2} \leq C \left\| \Psi_1 \right\|_{{\cal H}_1}$$ for all $\Psi_1 \in {\cal H}_1$ and where $\left\| ~~ \right\|_{{\cal H}_1}$ is the norm as defined on ${\cal H}_1$ (similarly for ${\cal H}_2$)
Great! Now for the statement. Here it is.
The tensor product, ${\cal H}_1 \otimes {\cal H}_2$, of two Hilbert spaces, ${\cal H}_1$ and ${\cal H}_2$, may be defined as follows. Let $V$ denote the set of linear maps $A: {\overline {\cal H}}_1 \to {\cal H}_2$, which have finite rank, i.e. such that the range of $A$ is a finite dimensional subspace of ${\cal H}_2$. Then $V$ has a natural vector space structure. Define the inner product on $V$ by $$\left< A, B \right>_V = \text{tr}\left( A^\dagger B \right)$$ (The right side of the above equation is well defined, since $A^\dagger B: {\overline {\cal H}}_1 \to {\overline {\cal H}}_1$ has finite rank). We define ${\cal H}_1 \otimes {\cal H}_2$ to be the Hilbert space completion of $V$. It follows that ${\cal H}_1 \otimes {\cal H}_2$ consists of all linear maps $A: {\overline {\cal H}}_1 \to {\cal H}_2$ that satisfy the Hilbert-Schmidt condition $\text{tr}\left( A^\dagger A \right) < \infty$.
My question is
1. How does this definition of the Tensor product of Hilbert spaces match up with the one we are familiar with when dealing with tensors in General relativity?
PS - I also have a similar problem with Wald's definition of a Direct Sum of Hilbert spaces. I have decided to put that into a separate question. If you could answer this one, please consider checking out that one too. It can be found here. Thanks!
-
I figured out the first problem. Removed it from question. – Prahar Jul 10 '13 at 16:42
If we consider only vectors, you will have a problem, because a Hilbert space has a positive definite inner product, while the tangent space of the space-time manifold has a pseudo-Riemannian (Minkowski-like) metric – Trimok Jul 10 '13 at 18:30
I don't think Wald ever defines a tensor product for infinite dimensional spaces in his GR text, so I presume your question is about the finite dimensional case, where we simply write the tensor product as the vector space spanned by pairs $u_iv_j$ where $u$ and $v$ are bases. I will show the equivalence in that case.
If we have two finite dimensional Hilbert spaces $H_1$, $H_2$ we can take the orthonormal bases $u_i\in H_1$ , $v_j\in H_2$. Since everything is finite dimensional, everything is finite rank, so the vector space is just the space of linear maps from $H_1$ to $H_2$. Take a linear map $A$ and define $a_{ij}= \langle A(u_i),v_j\rangle = \langle u_i,A^{\dagger}(v_j)\rangle$. Using the orthonormality of the bases, that means $a_{ij}$ is simply the matrix representation of $A$, and the vector space is simply the appropriate vector space of matrices. Then we can interpret $Tr(A^\dagger B)$ as the usual matrix trace, which gives $\sum_{ij} a_{ij}^*b_{ij}$.
This is equivalent to the usual notation whereby we write tensor products as elements $\sum_{ij}a_{ij}u_i\otimes v_j$. Again the vector space is the appropriately sized matrices. The inner product is defined to be $\langle a\otimes b, c\otimes d\rangle = \langle a,c\rangle\cdot\langle b,d\rangle$. This gives the same result as above after plugging in the bases.
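To make this concrete (an added toy example, not in the original answer): take $H_1 = H_2 = \mathbb{C}^2$ with orthonormal bases $u_1, u_2$ and $v_1, v_2$. The element $u_1\otimes v_2 + u_2\otimes v_1$ corresponds to the linear map $A$ with matrix entries $a_{12}=a_{21}=1$, $a_{11}=a_{22}=0$, so $\langle A, A\rangle = Tr(A^\dagger A) = \sum_{ij}|a_{ij}|^2 = 2$, which agrees with $\langle u_1\otimes v_2 + u_2\otimes v_1,\, u_1\otimes v_2 + u_2\otimes v_1\rangle = 1 + 1 = 2$ computed from the tensor-product inner product.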
-
This is not from his GR text. This is from his text on QFT in curved spacetimes. – Prahar Jul 10 '13 at 18:25
@Prahar: Yeah I know. It's just that you said you wanted to see the definition was equivalent to the "usual" one in GR, but you didn't specify what the usual one is. So I assume you meant the usual finite dimensional notation, as in his GR book – BebopButUnsteady Jul 10 '13 at 18:28
Oh! sorry. My bad. – Prahar Jul 10 '13 at 18:32
Let there be given a (monoidal) category ${\cal C}$, e.g., the category of finite dimensional vector spaces, the category of Hilbert spaces, etc.
In such a category ${\cal C}$, one typically has the isomorphism
$$\tag{1} {\cal H} \otimes {\cal K} \cong {\cal L}({\cal H}^{*}, {\cal K}),$$
where ${\cal H}^{*}$ is a dual object, and ${\cal L}$ is the pertinent space of morphisms ${\cal H}^{*}\to {\cal K}$.
Often textbooks don't provide the actual definition of a tensor product, which is nevertheless at least partly explained on Wikipedia, but instead cheat by using the isomorphism (1) as a working definition of a tensor product ${\cal H} \otimes {\cal K}$.
- |
## How Kindle Helps You Learn a Second Language
I was skeptical about Kindles. I liked the feel of a book. The smell. I didn’t want to read on a small computer screen. No. Well, not until I discovered that my Kindle would help me improve my second language, Spanish. Now I would recommend ANYONE who is...
## Can Discrimination of Non-Native Teachers be Justified for Marketing?
The Native vs Non-Native English teacher debate has been done to death. “The native has a better accent” “but the non-native might understand the learning process better”, blah blah. We know that. There are advantages and disadvantages to both,...
## The Union Jack – Who is Jack?!
“Who is Jack?” This was a question a student asked me when we were talking about flags. The Union Jack is the name of the United Kingdom flag. But who is Jack? I honestly had no idea. So I set out to find the answer. The Union part of the name Union Jack...
## 10 Signs You are 100% Fluent in a Language
Fluent is a relative term. I don’t know if I consider myself fluent in French, even though I’ve got a degree in it and spent 5 months living there. I still make stupid mistakes and I often tell people ‘he’s crying’ when I mean...
## A library where you borrow cats, not books
A government office in New Mexico, USA has a cat library. Employees can go to the cat library, choose a cat and take it back to their desk for one hour. Employees report feeling happier and more productive at work. How did this start? A local cat shelter (a place... |
# 1221. Split a String in Balanced Strings
## Description
Balanced strings are those that have an equal quantity of 'L' and 'R' characters.
Given a balanced string s, split it in the maximum amount of balanced strings.
Return the maximum amount of split balanced strings.
Example 1:
Input: s = "RLRRLLRLRL"
Output: 4
Explanation: s can be split into "RL", "RRLL", "RL", "RL", each substring contains same number of 'L' and 'R'.
Example 2:
Input: s = "RLLLLRRRLR"
Output: 3
Explanation: s can be split into "RL", "LLLRRR", "LR", each substring contains same number of 'L' and 'R'.
Example 3:
Input: s = "LLLLRRRR"
Output: 1
Explanation: s can be split into "LLLLRRRR".
Constraints:
• 1 <= s.length <= 1000
• s[i] is either 'L' or 'R'.
• s is a balanced string. |
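A straightforward greedy approach (added here as a sketch; the repository's own solution listing is not included above) keeps a running balance over the characters and counts how many times it returns to zero, since every return to zero closes one maximal balanced piece.

```java
class Solution {
    public int balancedStringSplit(String s) {
        int ans = 0, balance = 0;
        for (char c : s.toCharArray()) {
            // +1 for 'L', -1 for 'R'
            balance += c == 'L' ? 1 : -1;
            // each time the counts even out, one balanced substring ends
            if (balance == 0) {
                ++ans;
            }
        }
        return ans;
    }
}
```

On "RLRRLLRLRL" the balance returns to zero after "RL", "RRLL", "RL" and "RL", giving 4, which matches Example 1; cutting at every zero crossing can only increase the number of pieces, so the greedy split is optimal.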
### How to get the global indexes of all the four vertices of a tetrahedron in C++?
Hi all, I have encountered a problem while trying to obtain the global indexes of the vertices of a cell (tetrahedron). In order to do this, I tried to compute these quantities for the first cell (the 0th) and then, using a function equal to zero everywhere, I marked the values of this function at those vertices. The code below should make this clearer:
// create mesh Th (unit cube) and function space Vh (P1, continuous Lagrange)
// create the function (P1, continuous Lagrange)
auto fun = std::make_shared<Function>(Vh);
// create an auxiliary vector containing the vertices values of the above function
std::vector<double> funvect(Th->num_vertices(),0.0);
// set f(x) = 0
(fun->vector())->set_local(funvect);
// initialize 0-dim entities. Tried also with init(0)
Th->init_global(0);
// create the first cell (0th cell)
Cell myCell(*Th, 0);
// create an ufc::cell in order to get the Cell topology
ufc::cell c1;
// get cell topology
myCell.get_cell_topology(c1);
// get the GLOBAL INDEXES of the 4 vertices
std::vector<long unsigned int> vertex= c1.entity_indices[0];
// mark with 1000 the vertices
for(int ee=0; ee<vertex.size(); ee++) {
funvect[vertex[ee]] = 1000;
}
// reset the function
(fun->vector())->set_local(funvect);
// save the results
File file1("function.pvd");
file1<<*fun;
When I visualize the function, four vertices are indeed marked, but they are scattered across the whole domain with no apparent pattern! I expect to see the four marked vertices on a single cell.
There could be several problems:
1) the ufc::cell global vertices indexes are different from the dolfin::Mesh ones.
2) the ufc::cell global vertices indexes are different from those of dolfin::Function::vector()
3) the line "(fun.vector())->set_local(funvect);" does not put the values in the right order. But this was ensured in this topic: https://www.allanswered.com/post/rwwv/how-to-extract-the-coefficients-of-a-dolphinfunction-in-c/ , and furthermore I have used this procedure in other problems, and it worked fine.
Community: FEniCS Project
I have found this documentation about this issue, it can explain this (it is for Python but I think it is fine also for C++): https://github.com/mikaem/fenicstools/wiki/DofMap-plotter
written 10 months ago by Francesco Clerici |
# Replace the “Ibid.” string with string “Ivi” when the same reference is cited at a different page
I'm using biblatex with ext-verbose-trad1 style in memoir.
I have to use the abbreviation "ivi" when subsequently citing the same reference at a different page, while maintaining the usual "ibid." abbreviation when citing the same reference at the same page.
The question is also addressed in tex.stackexchange.com/q/418701/35864, where a patch is suggested.
As the following MWE demonstrates, the suggested patch works perfectly with the ext-verbose-trad1 style:
\documentclass[12pt, a4paper]{memoir}
\usepackage[italian]{babel}
%patch to use ibid and ivi
\usepackage{xpatch}
\NewBibliographyString{ibidemloccit,ibidemnoloccit}
\DefineBibliographyStrings{italian}{%
idem = {\autocap{i}d},
ibidemnoloccit = {\mkbibemph{\autocap{i}vi}},
}
\xpatchbibmacro{author}
{\printnames{author}}
{\iffootnote
{\ifthenelse{\ifciteidem\AND\NOT\boolean{cbx:noidem}}
{\usebibmacro{cite:idem}}
{\printnames{author}}}
{\printnames{author}}}
{}{}
\xpatchbibmacro{bbx:editor}
{\printnames{editor}}
{\iffootnote
{\ifthenelse{\ifciteidem\AND\NOT\boolean{cbx:noidem}}
{\usebibmacro{cite:idem}}
{\printnames{editor}}}
{\printnames{editor}}}
{}{}
\xpatchbibmacro{bbx:translator}
{\printnames{translator}}
{\iffootnote
{\ifthenelse{\ifciteidem\AND\NOT\boolean{cbx:noidem}}
{\usebibmacro{cite:idem}}
{\printnames{translator}}}
{\printnames{translator}}}
{}{}
\renewbibmacro*{cite:ibid}{%
\printtext{%
\ifloccit
{\bibstring[\mkibid]{ibidemloccit}%
\global\toggletrue{cbx:loccit}}
{\bibstring[\mkibid]{ibidemnoloccit}}}}
\begin{document}
Lorem \footcite{aristotle:anima}
Lorem \footcite[14]{aristotle:anima}
Lorem \footcite[198]{aristotle:anima}
ipsum \footcite[198]{aristotle:anima}
\printbibliography
\end{document}
giving the output:
When I use the ext-verbose-trad2 style it gives a different result: the last citation is a plain repetition of the previous one, whereas the string 'Ibid.' should appear.
• For questions like this it is a very, very good idea to include a short example document that produces output as shown in the first screenshot. Then we can all be sure that we are talking about the same thing. It also helps those willing to help you get started more quickly (because they don't have to rebuild on their own what you have already). Such an example document is often called MWE (tex.meta.stackexchange.com/q/228/35864). – moewe Aug 28 '20 at 12:56
• I edited the desired MWE – Paolo Polesana Aug 28 '20 at 14:36
• The MWE and the screenshot don't seem to match exactly. There are no page references in the code of the MWE. – moewe Aug 28 '20 at 15:27
• Does tex.stackexchange.com/q/418701/35864 help? – moewe Aug 28 '20 at 15:27
• Yes, it helps quite a lot. Thanks! – Paolo Polesana Aug 28 '20 at 15:46
The answer from the linked question still works for (ext-)verbose-trad2. You were just missing the most important ingredient of the answer (which is mentioned in its first sentence): you need to set the option ibidpage=true.
\documentclass[12pt, a4paper]{memoir}
\usepackage[italian]{babel}
%patch to use ibid and ivi
\usepackage{xpatch}
\NewBibliographyString{ibidemloccit,ibidemnoloccit}
\DefineBibliographyStrings{italian}{%
idem = {\autocap{i}d},
ibidemnoloccit = {\mkbibemph{\autocap{i}vi}},
}
\xpatchbibmacro{author}
{\printnames{author}}
{\iffootnote
{\ifthenelse{\ifciteidem\AND\NOT\boolean{cbx:noidem}}
{\usebibmacro{cite:idem}}
{\printnames{author}}}
{\printnames{author}}}
{}{}
\xpatchbibmacro{bbx:editor}
{\printnames{editor}}
{\iffootnote
{\ifthenelse{\ifciteidem\AND\NOT\boolean{cbx:noidem}}
{\usebibmacro{cite:idem}}
{\printnames{editor}}}
{\printnames{editor}}}
{}{}
\xpatchbibmacro{bbx:translator}
{\printnames{translator}}
{\iffootnote
{\ifthenelse{\ifciteidem\AND\NOT\boolean{cbx:noidem}}
{\usebibmacro{cite:idem}}
{\printnames{translator}}}
{\printnames{translator}}}
{}{}
\renewbibmacro*{cite:ibid}{%
\printtext{%
\ifloccit
{\bibstring[\mkibid]{ibidemloccit}%
\global\toggletrue{cbx:loccit}}
{\bibstring[\mkibid]{ibidemnoloccit}}}}
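For this MWE to actually compile, the preamble must also load biblatex with the key option set and point at a bibliography containing aristotle:anima; presumably something along the lines of the two lines below, though the exact option list and resource file are assumptions, since they are not shown above:
\usepackage[style=ext-verbose-trad2, ibidpage=true]{biblatex}
\addbibresource{biblatex-examples.bib}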
## anonymous 4 years ago hello,,, question... can I use the rational root test to prove that the cube root of 2 and the square root of 3 are irrational?
• This Question is Closed
1. anonymous
sure if you consider the polynomial $x^3-2$ the rational roots can only be $\pm 1,\pm2$ and by inspection neither of those work.
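Spelled out (an added note, not part of the original thread): by the rational root theorem, any rational root $p/q$ of $x^3-2$ in lowest terms must have $p$ dividing $2$ and $q$ dividing $1$, so the only candidates are $\pm 1, \pm 2$; none of them satisfies $x^3 = 2$, so the real root $\sqrt[3]{2}$ of $x^3-2$ is irrational. The same argument applied to $x^2-3$ with candidates $\pm 1, \pm 3$ shows that $\sqrt{3}$ is irrational.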
2. anonymous
cool.. yea i did that... thanks satellite
3. anonymous
yw
4. anonymous
i guess if you want a real proof you have to know that the root exists, but that is clear because this is a continuous function that takes on both positive and negative values
5. anonymous
waitt
6. anonymous
they are really asking me prove in detail
7. anonymous
what u mean the root exist?
CAVITY OF CREATION FOR COLD FUSION AND GENERATION OF HEAT
• Oh, Hung-Kuk (School of Mechanical and Industrial Engineering, Ajou University)
• Published : 1996.10.01
Abstract
Cold fusion technologies are now being developed very successfully. The $\pi$-far infrared rays are generated from three-dimensional crystallizing $\pi$-bondings of oxygen atoms in water molecules. The growing cavity in the water makes a near-resonance state and a vortex of infrared rays, and attracts $\pi$-far infrared rays in the water. The cavity, surrounded by a lot of $\pi$-far infrared rays, has a very strong gravitational field. The $\pi$-far infrared rays are contracted into $\pi$-far infrared rays of half wave length and of one wave length. The $\pi$-far infrared rays of half wave length generate heat, while the $\pi$-far infrared rays of one wave length are contracted into $\pi$-gamma rays of one wave length. The contracted $\pi$-gamma rays of one wave length make nucleons and mesons, which is the creation and transmutation of matter by covalent bondings and three-dimensional crystallizing $\pi$-bondings into implosion bonding. The Patterson power cell generates a very strong gravitational cavity because the electrolyzed oxygen atoms make more $\pi$-far infrared rays than plain water does.